Data Wrangling with Pandas

Python is a terrific platform for statistical data analysis, partly because of the features of the language itself, but also because of a rich suite of third-party packages that provide robust and flexible data structures, efficient implementations of mathematical and statistical functions, and facilities for generating publication-quality graphics. Pandas sits at the top of this "scientific stack", because it allows data to be imported, manipulated and exported so easily. In contrast, NumPy supports the bottom of the stack with fundamental infrastructure for array operations, mathematical calculations, and random number generation.

We will cover both of these in some detail before getting down to the business of analyzing data.


In [1]:
%matplotlib inline
import pandas as pd
import numpy as np

# Set some Pandas options
pd.set_option('html', False)
pd.set_option('max_columns', 30)
pd.set_option('max_rows', 20)

NumPy

The most fundamental third-party package for scientific computing in Python is NumPy, which provides multidimensional array data types, along with associated functions and methods to manipulate them. While Python comes with several container types (list, tuple, dict), NumPy's arrays are implemented much closer to the hardware, and are therefore more efficient than the built-in types.

Basics of NumPy arrays

The main object provided by NumPy is a powerful array, the ndarray. We'll start by exploring how the NumPy array differs from a Python list, creating a simple list and an array with the same contents:


In [2]:
a_list = range(1000)
an_array = np.arange(1000)

This is what the array looks like:


In [3]:
an_array[:10]


Out[3]:
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

In [4]:
type(an_array)


Out[4]:
numpy.ndarray

In [5]:
timeit [i**2 for i in a_list]


10000 loops, best of 3: 115 µs per loop

In [6]:
timeit an_array**2


100000 loops, best of 3: 1.95 µs per loop

Elements of a one-dimensional array are indexed and sliced with square brackets, as with lists:


In [7]:
an_array[5:10]


Out[7]:
array([5, 6, 7, 8, 9])

The first difference to note between lists and arrays is that arrays are homogeneous; i.e. all elements of an array must be of the same type. In contrast, lists can contain elements of arbitrary type. For example, we can change the first element in our list above to be a string:


In [8]:
a_list[0] = 'a string inside a list'
a_list[:10]


Out[8]:
['a string inside a list', 1, 2, 3, 4, 5, 6, 7, 8, 9]

In [9]:
an_array[0] = 'a string inside an array'


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-9-c0f4ea2ca15d> in <module>()
----> 1 an_array[0] = 'a string inside an array'

ValueError: invalid literal for long() with base 10: 'a string inside an array'

The information about the type of an array is contained in its dtype attribute:


In [ ]:
an_array.dtype

Once an array has been created, its dtype is fixed and it can only store elements of the same type. For this example where the dtype is integer, if we store a floating point number it will be automatically converted into an integer:


In [10]:
an_array[0] = 1.234
an_array[:10]


Out[10]:
array([1, 1, 2, 3, 4, 5, 6, 7, 8, 9])
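
If floating point values are needed from the outset, the dtype can be requested explicitly when the array is created; a minimal sketch:

float_array = np.array([1, 2, 3], dtype=float)
float_array.dtype   # dtype('float64')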

The linspace and logspace functions create linearly and logarithmically spaced grids, respectively, with a fixed number of points that include both ends of the specified interval:


In [11]:
np.linspace(0, 1, num=5)


Out[11]:
array([ 0.  ,  0.25,  0.5 ,  0.75,  1.  ])

In [12]:
np.logspace(1, 4, num=4)


Out[12]:
array([    10.,    100.,   1000.,  10000.])
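
By default, logspace spaces its points along powers of 10; a different base can be requested via the base argument. A quick sketch:

np.logspace(0, 4, num=5, base=2)   # array([  1.,   2.,   4.,   8.,  16.])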

It is often useful to create arrays of random numbers that follow a specific distribution. The np.random module contains a number of functions for this purpose; for example, the following produces an array of 5 samples drawn from a standard normal distribution (mean 0 and variance 1):


In [13]:
np.random.randn(5)


Out[13]:
array([ 0.23384357,  0.42443595,  0.61751552, -0.2040506 , -0.97145269])

whereas the following will give 10 samples, drawn from a normal distribution with a mean of 10 and a standard deviation of 3:


In [14]:
norm_10 = np.random.normal(loc=10, scale=3, size=10)
norm_10


Out[14]:
array([  7.40864972,   5.28860042,  11.4584049 ,  13.22996099,
        16.2038088 ,   7.79999289,  15.28090008,   9.23417946,
         7.68310883,  11.72980755])
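
Because these draws are pseudo-random, the values will differ from run to run; seeding the generator makes results reproducible. A minimal sketch:

np.random.seed(42)
np.random.randn(5)   # the same 5 draws on every run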

Indexing with other arrays

Above we saw how to index arrays with single numbers and slices, just like Python lists. But arrays allow for a more sophisticated kind of indexing which is very powerful: you can index an array with another array, and in particular with an array of boolean values. This is particularly useful for extracting information from an array that matches a certain condition.

Consider for example that in the array norm_10 we want to replace all values above 9 with the value 0. We can do so by first building the mask that indicates where this condition is True or False:


In [15]:
mask = norm_10 > 9
mask


Out[15]:
array([False, False,  True,  True,  True, False,  True,  True, False,  True], dtype=bool)

Now that we have this mask, we can use it to either read those values or to reset them to 0:


In [16]:
norm_10[mask]


Out[16]:
array([ 11.4584049 ,  13.22996099,  16.2038088 ,  15.28090008,
         9.23417946,  11.72980755])

In [17]:
norm_10[mask] = 0
print norm_10


[ 7.40864972  5.28860042  0.          0.          0.          7.79999289
  0.          0.          7.68310883  0.        ]

The np.nonzero function returns the indices of all non-zero elements, which can themselves be used as an index:


In [18]:
norm_10[np.nonzero(norm_10)]


Out[18]:
array([ 7.40864972,  5.28860042,  7.79999289,  7.68310883])
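
The mask need not be stored in a variable first: boolean conditions can be used inline, and np.where returns the indices at which a condition holds. A quick sketch:

norm_10[norm_10 > 5]    # values greater than 5
np.where(norm_10 > 5)   # tuple of index arrays where the condition is True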

Multidimensional Arrays

NumPy can create arrays of arbitrary dimensions, and all the methods illustrated in the previous section work with more than one dimension. For example, a list of lists can be used to initialize a two-dimensional array:


In [19]:
array_2d = np.array([[1, 2], [3, 4]])
array_2d.shape


Out[19]:
(2, 2)

With two-dimensional arrays we start seeing the power of NumPy: while a nested list must be indexed by applying the [ ] operator repeatedly, multidimensional arrays support a much more natural indexing syntax with a single [ ] and a set of indices separated by commas:


In [20]:
array_2d[0,1]


Out[20]:
2

The shape of an array can be changed at any time, as long as the total number of elements is unchanged. For example, if we want a 2x4 array with numbers increasing from 0, the easiest way to create it is via the numpy array's reshape method.


In [21]:
md_array = np.arange(8).reshape(2,4)
print md_array


[[0 1 2 3]
 [4 5 6 7]]

With multidimensional arrays, you can also use slices, and you can mix and match slices and single indices in the different dimensions (using the same array as above):


In [22]:
md_array[1, 2:4]


Out[22]:
array([6, 7])

In [23]:
md_array[:, 2]


Out[23]:
array([2, 6])

If you only provide one index, then you will get the corresponding row.


In [24]:
md_array[1]


Out[24]:
array([4, 5, 6, 7])

Arrays have a slew of useful attributes and methods:


In [25]:
md_array.dtype


Out[25]:
dtype('int64')

In [26]:
md_array.shape


Out[26]:
(2, 4)

In [27]:
md_array.ndim


Out[27]:
2

In [28]:
md_array.nbytes


Out[28]:
64

In [29]:
md_array.min(), md_array.max()


Out[29]:
(0, 7)

In [30]:
md_array.sum(), md_array.prod()


Out[30]:
(28, 0)

In [31]:
md_array.mean(), md_array.std()


Out[31]:
(3.5, 2.2912878474779199)

Arrays may be summarized along specified axes:


In [32]:
md_array.sum(axis=0)


Out[32]:
array([ 4,  6,  8, 10])

In [33]:
md_array.sum(axis=1)


Out[33]:
array([ 6, 22])

Or, more generally:


In [34]:
random_array = np.random.random((3,2,3,4))
random_array


Out[34]:
array([[[[ 0.46830226,  0.32731366,  0.23317943,  0.22991289],
         [ 0.20261665,  0.93655274,  0.17941865,  0.99712239],
         [ 0.5154546 ,  0.69956186,  0.3515375 ,  0.89186765]],

        [[ 0.37951754,  0.87061811,  0.52325301,  0.19753693],
         [ 0.716204  ,  0.0015645 ,  0.17882149,  0.7549386 ],
         [ 0.89425408,  0.41199676,  0.29458076,  0.27091564]]],


       [[[ 0.78263062,  0.1324776 ,  0.867391  ,  0.93512634],
         [ 0.40886267,  0.83681014,  0.62876166,  0.26180909],
         [ 0.17686583,  0.30163449,  0.48994319,  0.5443208 ]],

        [[ 0.79109438,  0.85315179,  0.27670091,  0.95031492],
         [ 0.40366678,  0.08635273,  0.49408251,  0.83502123],
         [ 0.97103849,  0.00309119,  0.45505554,  0.53595346]]],


       [[[ 0.85388212,  0.54293366,  0.40702197,  0.02730233],
         [ 0.15515379,  0.5897746 ,  0.87678707,  0.611536  ],
         [ 0.5585089 ,  0.94662013,  0.29901721,  0.47758475]],

        [[ 0.54216565,  0.22196223,  0.01673971,  0.65437799],
         [ 0.80744884,  0.19855128,  0.75430624,  0.83723579],
         [ 0.4197388 ,  0.01611553,  0.44570697,  0.62499116]]]])

In [35]:
random_array.sum(2).shape


Out[35]:
(3, 2, 4)

NumPy arrays support all standard arithmetic operations, which are applied element-wise:


In [36]:
first_array = np.random.randn(4)
second_array = np.random.randn(4)

first_array, second_array


Out[36]:
(array([ 0.83038148, -0.82847091,  1.23465502,  1.05221313]),
 array([ 0.15413213, -0.52053375, -0.78537127, -0.51907577]))

In [37]:
first_array * second_array


Out[37]:
array([ 0.12798847,  0.43124707, -0.96966258, -0.54617834])

When one of the operands is a scalar (a zero-dimensional object), broadcasting is used to apply the operation to each element:


In [38]:
first_array * 5


Out[38]:
array([ 4.15190741, -4.14235457,  6.17327509,  5.26106567])

Broadcasting also works for multidimensional arrays:


In [39]:
md_array


Out[39]:
array([[0, 1, 2, 3],
       [4, 5, 6, 7]])

In [40]:
md_array * first_array


Out[40]:
array([[ 0.        , -0.82847091,  2.46931004,  3.1566394 ],
       [ 3.32152593, -4.14235457,  7.40793011,  7.36549194]])

In the above, NumPy compares the trailing dimensions of each array and prepends dimensions of length 1 for any remaining dimensions before multiplying; two dimensions are compatible when they are equal or one of them is 1. Hence, the following will not work, because the trailing dimension of md_array is 4 while the second operand has length 2:


In [41]:
md_array * np.array([-1, 2.3])


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-41-915241ee4d97> in <module>()
----> 1 md_array * np.array([-1, 2.3])

ValueError: operands could not be broadcast together with shapes (2,4) (2,) 

This can be made to work either by "injecting" an additional axis, or by transposing the first array:


In [42]:
md_array * np.array([-1, 2.3])[:, np.newaxis]


Out[42]:
array([[ -0. ,  -1. ,  -2. ,  -3. ],
       [  9.2,  11.5,  13.8,  16.1]])

In [43]:
md_array.T * np.array([-1, 2.3])


Out[43]:
array([[ -0. ,   9.2],
       [ -1. ,  11.5],
       [ -2. ,  13.8],
       [ -3. ,  16.1]])
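
To see why both fixes work, compare the shapes involved: injecting an axis turns the shape-(2,) array into (2, 1), which broadcasts cleanly against (2, 4). A quick check:

col = np.array([-1, 2.3])[:, np.newaxis]
col.shape                  # (2, 1)
(md_array * col).shape     # (2, 4)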

Some may have expected the multiplication operator to perform matrix multiplication on two array arguments, rather than element-wise multiplication. NumPy includes a linear algebra library, and matrix multiplication can be carried out using the dot (i.e. dot product) function or method:


In [44]:
md_array.dot(first_array)


Out[44]:
array([  4.79747853,  13.95259341])

In [45]:
np.dot(md_array, first_array)


Out[45]:
array([  4.79747853,  13.95259341])
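
dot also performs true matrix multiplication between two 2-dimensional arrays, provided the inner dimensions agree; a minimal sketch:

md_array.dot(md_array.T)   # (2,4) times (4,2) gives a (2,2) matrix product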

Introduction to Pandas

pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with relational or labeled data both easy and intuitive. It is a fundamental high-level building block for doing practical, real-world data analysis in Python.

pandas is well suited for:

  • Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
  • Ordered and unordered (not necessarily fixed-frequency) time series data.
  • Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
  • Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure

Key features:

  • Easy handling of missing data
  • Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
  • Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically
  • Powerful, flexible group by functionality to perform split-apply-combine operations on data sets
  • Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
  • Intuitive merging and joining data sets
  • Flexible reshaping and pivoting of data sets
  • Hierarchical labeling of axes
  • Robust IO tools for loading data from flat files, Excel files, databases, and HDF5
  • Time series functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.

In [46]:
from IPython.core.display import HTML
HTML("<iframe src=http://pandas.pydata.org width=800 height=350></iframe>")


Out[46]:

Pandas Data Structures

Series

A Series is a single vector of data (like a NumPy array) with an index that labels each element in the vector.


In [47]:
counts = pd.Series([632, 1638, 569, 115])
counts


Out[47]:
0     632
1    1638
2     569
3     115
dtype: int64

If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the Series, while the index is a pandas Index object.


In [48]:
counts.values


Out[48]:
array([ 632, 1638,  569,  115])

In [49]:
counts.index


Out[49]:
Int64Index([0, 1, 2, 3], dtype='int64')

We can assign meaningful labels to the index, if they are available:


In [50]:
bacteria = pd.Series([632, 1638, 569, 115], 
    index=['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'])

bacteria


Out[50]:
Firmicutes         632
Proteobacteria    1638
Actinobacteria     569
Bacteroidetes      115
dtype: int64

These labels can be used to refer to the values in the Series.


In [51]:
bacteria['Actinobacteria']


Out[51]:
569

In [52]:
bacteria[[name.endswith('bacteria') for name in bacteria.index]]


Out[52]:
Proteobacteria    1638
Actinobacteria     569
dtype: int64

In [53]:
[name.endswith('bacteria') for name in bacteria.index]


Out[53]:
[False, True, True, False]
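
The same boolean mask can be built without an explicit list comprehension by mapping a function over the index; a minimal sketch:

bacteria[bacteria.index.map(lambda name: name.endswith('bacteria'))]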

Notice that the indexing operation preserved the association between the values and the corresponding indices.

We can still use positional indexing if we wish.


In [54]:
bacteria[0]


Out[54]:
632

We can give both the array of values and the index meaningful labels themselves:


In [55]:
bacteria.name = 'counts'
bacteria.index.name = 'phylum'
bacteria


Out[55]:
phylum
Firmicutes         632
Proteobacteria    1638
Actinobacteria     569
Bacteroidetes      115
Name: counts, dtype: int64

NumPy's math functions and other operations can be applied to Series without losing the data structure.


In [56]:
np.log(bacteria)


Out[56]:
phylum
Firmicutes        6.448889
Proteobacteria    7.401231
Actinobacteria    6.343880
Bacteroidetes     4.744932
Name: counts, dtype: float64

We can also filter according to the values in the Series:


In [57]:
bacteria[bacteria>1000]


Out[57]:
phylum
Proteobacteria    1638
Name: counts, dtype: int64

A Series can be thought of as an ordered key-value store. In fact, we can create one from a dict:


In [58]:
bacteria_dict = {'Firmicutes': 632, 'Proteobacteria': 1638, 'Actinobacteria': 569, 'Bacteroidetes': 115}
pd.Series(bacteria_dict)


Out[58]:
Actinobacteria     569
Bacteroidetes      115
Firmicutes         632
Proteobacteria    1638
dtype: int64

Notice that the Series is created in key-sorted order.

If we pass a custom index to Series, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. Pandas uses the NaN (not a number) type for missing values.


In [59]:
bacteria2 = pd.Series(bacteria_dict, index=['Cyanobacteria','Firmicutes','Proteobacteria','Actinobacteria'])
bacteria2


Out[59]:
Cyanobacteria      NaN
Firmicutes         632
Proteobacteria    1638
Actinobacteria     569
dtype: float64

In [60]:
bacteria2.isnull()


Out[60]:
Cyanobacteria      True
Firmicutes        False
Proteobacteria    False
Actinobacteria    False
dtype: bool
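
The complement of isnull is notnull, and dropna strips the missing entries out entirely; a minimal sketch:

bacteria2[bacteria2.notnull()]   # keep only the observed values
bacteria2.dropna()               # equivalent shortcut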

Critically, the labels are used to align data when used in operations with other Series objects:


In [61]:
bacteria + bacteria2


Out[61]:
Actinobacteria    1138
Bacteroidetes      NaN
Cyanobacteria      NaN
Firmicutes        1264
Proteobacteria    3276
dtype: float64

Contrast this with NumPy arrays, where arrays of the same length combine values element-wise; adding Series combines values that share the same label in the resulting series. Notice also that the missing values were propagated by addition.

DataFrame

Inevitably, we want to be able to store, view and manipulate data that is multivariate, where for every index there are multiple fields or columns of data, often of varying data type.

A DataFrame is a tabular data structure, encapsulating multiple Series, like columns in a spreadsheet. Data are stored internally as a 2-dimensional object, but the DataFrame allows us to represent and manipulate higher-dimensional data.


In [62]:
data = pd.DataFrame({'value':[632, 1638, 569, 115, 433, 1130, 754, 555],
                     'patient':[1, 1, 1, 1, 2, 2, 2, 2],
                     'phylum':['Firmicutes', 'Proteobacteria', 'Actinobacteria', 
    'Bacteroidetes', 'Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes']})
data


Out[62]:
   patient          phylum  value
0        1      Firmicutes    632
1        1  Proteobacteria   1638
2        1  Actinobacteria    569
3        1   Bacteroidetes    115
4        2      Firmicutes    433
5        2  Proteobacteria   1130
6        2  Actinobacteria    754
7        2   Bacteroidetes    555

[8 rows x 3 columns]

Notice the DataFrame is sorted by column name. We can change the order by indexing them in the order we desire:


In [63]:
data[['phylum','value','patient']]


Out[63]:
           phylum  value  patient
0      Firmicutes    632        1
1  Proteobacteria   1638        1
2  Actinobacteria    569        1
3   Bacteroidetes    115        1
4      Firmicutes    433        2
5  Proteobacteria   1130        2
6  Actinobacteria    754        2
7   Bacteroidetes    555        2

[8 rows x 3 columns]

A DataFrame has a second index, representing the columns:


In [64]:
data.columns


Out[64]:
Index([u'patient', u'phylum', u'value'], dtype='object')

If we wish to access columns, we can do so either by dict-like indexing or by attribute:


In [65]:
data['value']


Out[65]:
0     632
1    1638
2     569
3     115
4     433
5    1130
6     754
7     555
Name: value, dtype: int64

In [66]:
data.value


Out[66]:
0     632
1    1638
2     569
3     115
4     433
5    1130
6     754
7     555
Name: value, dtype: int64

In [67]:
type(data.value)


Out[67]:
pandas.core.series.Series

In [68]:
type(data[['value']])


Out[68]:
pandas.core.frame.DataFrame

Notice this is different from a Series, where dict-like indexing retrieved a particular element (row). If we want access to a row in a DataFrame, we index into its ix attribute.


In [69]:
data.ix[3]


Out[69]:
patient                1
phylum     Bacteroidetes
value                115
Name: 3, dtype: object

It's important to note that the Series returned when a DataFrame is indexed is merely a view on the DataFrame, and not a copy of the data itself. So you must be cautious when manipulating this data:


In [70]:
vals = data.value
vals


Out[70]:
0     632
1    1638
2     569
3     115
4     433
5    1130
6     754
7     555
Name: value, dtype: int64

In [71]:
vals[5] = 0
vals


Out[71]:
0     632
1    1638
2     569
3     115
4     433
5       0
6     754
7     555
Name: value, dtype: int64

In [72]:
data


Out[72]:
   patient          phylum  value
0        1      Firmicutes    632
1        1  Proteobacteria   1638
2        1  Actinobacteria    569
3        1   Bacteroidetes    115
4        2      Firmicutes    433
5        2  Proteobacteria      0
6        2  Actinobacteria    754
7        2   Bacteroidetes    555

[8 rows x 3 columns]

In [73]:
vals = data.value.copy()
vals[5] = 1000
data


Out[73]:
   patient          phylum  value
0        1      Firmicutes    632
1        1  Proteobacteria   1638
2        1  Actinobacteria    569
3        1   Bacteroidetes    115
4        2      Firmicutes    433
5        2  Proteobacteria      0
6        2  Actinobacteria    754
7        2   Bacteroidetes    555

[8 rows x 3 columns]

Because vals is now a copy, the assignment above left data unchanged. We can create or modify columns by assignment:


In [74]:
data.value[3] = 14
data


Out[74]:
   patient          phylum  value
0        1      Firmicutes    632
1        1  Proteobacteria   1638
2        1  Actinobacteria    569
3        1   Bacteroidetes     14
4        2      Firmicutes    433
5        2  Proteobacteria      0
6        2  Actinobacteria    754
7        2   Bacteroidetes    555

[8 rows x 3 columns]

In [75]:
data['year'] = 2013
data


Out[75]:
   patient          phylum  value  year
0        1      Firmicutes    632  2013
1        1  Proteobacteria   1638  2013
2        1  Actinobacteria    569  2013
3        1   Bacteroidetes     14  2013
4        2      Firmicutes    433  2013
5        2  Proteobacteria      0  2013
6        2  Actinobacteria    754  2013
7        2   Bacteroidetes    555  2013

[8 rows x 4 columns]

But note, we cannot use the attribute indexing method to add a new column; the assignment below merely attaches an attribute to the DataFrame object, leaving the columns unchanged:


In [76]:
data.treatment = 1
data


Out[76]:
   patient          phylum  value  year
0        1      Firmicutes    632  2013
1        1  Proteobacteria   1638  2013
2        1  Actinobacteria    569  2013
3        1   Bacteroidetes     14  2013
4        2      Firmicutes    433  2013
5        2  Proteobacteria      0  2013
6        2  Actinobacteria    754  2013
7        2   Bacteroidetes    555  2013

[8 rows x 4 columns]

In [77]:
data.treatment


Out[77]:
1

Specifying a Series as a new column causes its values to be added according to the DataFrame's index:


In [78]:
treatment = pd.Series([0]*4 + [1]*2)
treatment


Out[78]:
0    0
1    0
2    0
3    0
4    1
5    1
dtype: int64

In [79]:
data['treatment'] = treatment
data


Out[79]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1
6        2  Actinobacteria    754  2013        NaN
7        2   Bacteroidetes    555  2013        NaN

[8 rows x 5 columns]

Other Python data structures (ones without an index) need to be the same length as the DataFrame:


In [80]:
month = ['Jan', 'Feb', 'Mar', 'Apr']
data['month'] = month


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-80-360d03fdde9a> in <module>()
      1 month = ['Jan', 'Feb', 'Mar', 'Apr']
----> 2 data['month'] = month

/Users/fonnescj/.virtualenvs/pymc2/lib/python2.7/site-packages/pandas-0.13.1_213_gc174c3d-py2.7-macosx-10.9-intel.egg/pandas/core/frame.pyc in __setitem__(self, key, value)
   1894         else:
   1895             # set column
-> 1896             self._set_item(key, value)
   1897 
   1898     def _setitem_slice(self, key, value):

/Users/fonnescj/.virtualenvs/pymc2/lib/python2.7/site-packages/pandas-0.13.1_213_gc174c3d-py2.7-macosx-10.9-intel.egg/pandas/core/frame.pyc in _set_item(self, key, value)
   1974         is_existing = key in self.columns
   1975         self._ensure_valid_index(value)
-> 1976         value = self._sanitize_column(key, value)
   1977         NDFrame._set_item(self, key, value)
   1978 

/Users/fonnescj/.virtualenvs/pymc2/lib/python2.7/site-packages/pandas-0.13.1_213_gc174c3d-py2.7-macosx-10.9-intel.egg/pandas/core/frame.pyc in _sanitize_column(self, key, value)
   2024         elif isinstance(value, Index) or _is_sequence(value):
   2025             if len(value) != len(self.index):
-> 2026                 raise ValueError('Length of values does not match length of '
   2027                                  'index')
   2028 

ValueError: Length of values does not match length of index
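
One workaround is to wrap the shorter list in a Series with an explicit index; the assignment then aligns on labels and fills the remaining rows with NaN. A sketch (not executed here, so data is left unchanged for what follows):

data['month'] = pd.Series(month, index=range(4))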

We can extract the underlying data as a simple ndarray by accessing the values attribute:


In [81]:
data.values


Out[81]:
array([[1, 'Firmicutes', 632, 2013, 0.0],
       [1, 'Proteobacteria', 1638, 2013, 0.0],
       [1, 'Actinobacteria', 569, 2013, 0.0],
       [1, 'Bacteroidetes', 14, 2013, 0.0],
       [2, 'Firmicutes', 433, 2013, 1.0],
       [2, 'Proteobacteria', 0, 2013, 1.0],
       [2, 'Actinobacteria', 754, 2013, nan],
       [2, 'Bacteroidetes', 555, 2013, nan]], dtype=object)

Notice that because of the mix of string and integer (and NaN) values, the dtype of the array is object. The dtype will automatically be chosen to be as general as needed to accommodate all the columns.

Pandas uses a custom data structure to represent the indices of Series and DataFrames.


In [82]:
data.index


Out[82]:
Int64Index([0, 1, 2, 3, 4, 5, 6, 7], dtype='int64')

Index objects are immutable:


In [83]:
data.index[0] = 15


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-83-42a852cc9eac> in <module>()
----> 1 data.index[0] = 15

/Users/fonnescj/.virtualenvs/pymc2/lib/python2.7/site-packages/pandas-0.13.1_213_gc174c3d-py2.7-macosx-10.9-intel.egg/pandas/core/base.pyc in _disabled(self, *args, **kwargs)
    178         """This method will not function because object is immutable."""
    179         raise TypeError("'%s' does not support mutable operations." %
--> 180                         self.__class__)
    181 
    182     __setitem__ = __setslice__ = __delitem__ = __delslice__ = _disabled

TypeError: '<class 'pandas.core.index.Int64Index'>' does not support mutable operations.

This is so that Index objects can be shared between data structures without fear that they will be changed.


In [84]:
bacteria2.index = bacteria.index

In [85]:
bacteria2


Out[85]:
phylum
Firmicutes         NaN
Proteobacteria     632
Actinobacteria    1638
Bacteroidetes      569
dtype: float64

Importing data

A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure:

genes = np.loadtxt("genes.csv", delimiter=",", dtype=[('gene', '|S10'), ('value', '<f4')])

Pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a DataFrame object. These functions include a slew of options to perform type inference, indexing, parsing, iterating and cleaning automatically as data are imported.

Let's start with some more bacteria data, stored in csv format.


In [86]:
!cat data/microbiome.csv


Taxon,Patient,Tissue,Stool
Firmicutes,1,632,305
Firmicutes,2,136,4182
Firmicutes,3,1174,703
Firmicutes,4,408,3946
Firmicutes,5,831,8605
Firmicutes,6,693,50
Firmicutes,7,718,717
Firmicutes,8,173,33
Firmicutes,9,228,80
Firmicutes,10,162,3196
Firmicutes,11,372,32
Firmicutes,12,4255,4361
Firmicutes,13,107,1667
Firmicutes,14,96,223
Firmicutes,15,281,2377
Proteobacteria,1,1638,3886
Proteobacteria,2,2469,1821
Proteobacteria,3,839,661
Proteobacteria,4,4414,18
Proteobacteria,5,12044,83
Proteobacteria,6,2310,12
Proteobacteria,7,3053,547
Proteobacteria,8,395,2174
Proteobacteria,9,2651,767
Proteobacteria,10,1195,76
Proteobacteria,11,6857,795
Proteobacteria,12,483,666
Proteobacteria,13,2950,3994
Proteobacteria,14,1541,816
Proteobacteria,15,1307,53
Actinobacteria,1,569,648
Actinobacteria,2,1590,4
Actinobacteria,3,25,2
Actinobacteria,4,259,300
Actinobacteria,5,568,7
Actinobacteria,6,1102,9
Actinobacteria,7,678,377
Actinobacteria,8,260,58
Actinobacteria,9,424,233
Actinobacteria,10,548,21
Actinobacteria,11,201,83
Actinobacteria,12,42,75
Actinobacteria,13,109,59
Actinobacteria,14,51,183
Actinobacteria,15,310,204
Bacteroidetes,1,115,380
Bacteroidetes,2,67,0
Bacteroidetes,3,0,0
Bacteroidetes,4,85,5
Bacteroidetes,5,143,7
Bacteroidetes,6,678,2
Bacteroidetes,7,4829,209
Bacteroidetes,8,74,651
Bacteroidetes,9,169,254
Bacteroidetes,10,106,10
Bacteroidetes,11,73,381
Bacteroidetes,12,30,359
Bacteroidetes,13,51,51
Bacteroidetes,14,2473,2314
Bacteroidetes,15,102,33
Other,1,114,277
Other,2,195,18
Other,3,42,2
Other,4,316,43
Other,5,202,40
Other,6,116,0
Other,7,527,12
Other,8,357,11
Other,9,106,11
Other,10,67,14
Other,11,203,6
Other,12,392,6
Other,13,28,25
Other,14,12,22
Other,15,305,32

This table can be read into a DataFrame using read_csv:


In [87]:
mb = pd.read_csv("data/microbiome.csv")
mb


Out[87]:
             Taxon  Patient  Tissue  Stool
0       Firmicutes        1     632    305
1       Firmicutes        2     136   4182
2       Firmicutes        3    1174    703
3       Firmicutes        4     408   3946
4       Firmicutes        5     831   8605
5       Firmicutes        6     693     50
6       Firmicutes        7     718    717
7       Firmicutes        8     173     33
8       Firmicutes        9     228     80
9       Firmicutes       10     162   3196
10      Firmicutes       11     372     32
11      Firmicutes       12    4255   4361
12      Firmicutes       13     107   1667
13      Firmicutes       14      96    223
14      Firmicutes       15     281   2377
15  Proteobacteria        1    1638   3886
16  Proteobacteria        2    2469   1821
17  Proteobacteria        3     839    661
18  Proteobacteria        4    4414     18
19  Proteobacteria        5   12044     83
               ...      ...     ...    ...

[75 rows x 4 columns]

Notice that read_csv automatically considered the first row in the file to be a header row.

We can override default behavior by customizing some of the arguments, like header, names or index_col.


In [88]:
pd.read_csv("data/microbiome.csv", header=None).head()


Out[88]:
            0        1       2      3
0       Taxon  Patient  Tissue  Stool
1  Firmicutes        1     632    305
2  Firmicutes        2     136   4182
3  Firmicutes        3    1174    703
4  Firmicutes        4     408   3946

[5 rows x 4 columns]

read_csv is just a convenience function for read_table, since csv is such a common format:


In [89]:
mb = pd.read_table("data/microbiome.csv", sep=',')

The sep argument can be customized as needed to accommodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately very common in some data formats:

sep='\s+'
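
For instance, a hypothetical whitespace-delimited copy of the same table could be read with:

mb = pd.read_table("data/microbiome_whitespace.txt", sep='\s+')  # hypothetical file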

For a more useful index, we can specify the first two columns, which together provide a unique index to the data.


In [90]:
mb = pd.read_csv("data/microbiome.csv", index_col=['Taxon','Patient'])
mb.head()


Out[90]:
                    Tissue  Stool
Taxon      Patient               
Firmicutes 1           632    305
           2           136   4182
           3          1174    703
           4           408   3946
           5           831   8605

[5 rows x 2 columns]

This is called a hierarchical index, which we will revisit later in the tutorial.

If we have sections of data that we do not wish to import (for example, known bad data), we can populate the skiprows argument:


In [91]:
pd.read_csv("data/microbiome.csv", skiprows=[3,4,6]).head()


Out[91]:
        Taxon  Patient  Tissue  Stool
0  Firmicutes        1     632    305
1  Firmicutes        2     136   4182
2  Firmicutes        5     831   8605
3  Firmicutes        7     718    717
4  Firmicutes        8     173     33

[5 rows x 4 columns]

Conversely, if we only want to import a small number of rows from, say, a very large data file we can use nrows:


In [92]:
pd.read_csv("data/microbiome.csv", nrows=4)


Out[92]:
        Taxon  Patient  Tissue  Stool
0  Firmicutes        1     632    305
1  Firmicutes        2     136   4182
2  Firmicutes        3    1174    703
3  Firmicutes        4     408   3946

[4 rows x 4 columns]

Alternatively, if we want to process our data in reasonable chunks, the chunksize argument will return an iterable object that can be employed in a data processing loop. For example, our microbiome data are organized by bacterial phylum, with 15 patients represented in each:


In [93]:
data_chunks = pd.read_csv("data/microbiome.csv", chunksize=15)

mean_tissue = {chunk.Taxon[0]:chunk.Tissue.mean() for chunk in data_chunks}
    
mean_tissue


Out[93]:
{'Actinobacteria': 449.06666666666666,
 'Bacteroidetes': 599.66666666666663,
 'Firmicutes': 684.39999999999998,
 'Other': 198.80000000000001,
 'Proteobacteria': 2943.0666666666666}
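
If the pieces ultimately need to be recombined into a single DataFrame, pd.concat will accept the list of chunks; a minimal sketch:

chunks = list(pd.read_csv("data/microbiome.csv", chunksize=15))
mb = pd.concat(chunks)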

Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including NA and NULL.


In [94]:
!cat data/microbiome_missing.csv


Taxon,Patient,Tissue,Stool
Firmicutes,1,632,305
Firmicutes,2,136,4182
Firmicutes,3,,703
Firmicutes,4,408,3946
Firmicutes,5,831,8605
Firmicutes,6,693,50
Firmicutes,7,718,717
Firmicutes,8,173,33
Firmicutes,9,228,NA
Firmicutes,10,162,3196
Firmicutes,11,372,-99999
Firmicutes,12,4255,4361
Firmicutes,13,107,1667
Firmicutes,14,?,223
Firmicutes,15,281,2377
Proteobacteria,1,1638,3886
Proteobacteria,2,2469,1821
Proteobacteria,3,839,661
Proteobacteria,4,4414,18
Proteobacteria,5,12044,83
Proteobacteria,6,2310,12
Proteobacteria,7,3053,547
Proteobacteria,8,395,2174
Proteobacteria,9,2651,767
Proteobacteria,10,1195,76
Proteobacteria,11,6857,795
Proteobacteria,12,483,666
Proteobacteria,13,2950,3994
Proteobacteria,14,1541,816
Proteobacteria,15,1307,53
Actinobacteria,1,569,648
Actinobacteria,2,1590,4
Actinobacteria,3,25,2
Actinobacteria,4,259,300
Actinobacteria,5,568,7
Actinobacteria,6,1102,9
Actinobacteria,7,678,377
Actinobacteria,8,260,58
Actinobacteria,9,424,233
Actinobacteria,10,548,21
Actinobacteria,11,201,83
Actinobacteria,12,42,75
Actinobacteria,13,109,59
Actinobacteria,14,51,183
Actinobacteria,15,310,204
Bacteroidetes,1,115,380
Bacteroidetes,2,67,0
Bacteroidetes,3,0,0
Bacteroidetes,4,85,5
Bacteroidetes,5,143,7
Bacteroidetes,6,678,2
Bacteroidetes,7,4829,209
Bacteroidetes,8,74,651
Bacteroidetes,9,169,254
Bacteroidetes,10,106,10
Bacteroidetes,11,73,381
Bacteroidetes,12,30,359
Bacteroidetes,13,51,51
Bacteroidetes,14,2473,2314
Bacteroidetes,15,102,33
Other,1,114,277
Other,2,195,18
Other,3,42,2
Other,4,316,43
Other,5,202,40
Other,6,116,0
Other,7,527,12
Other,8,357,11
Other,9,106,11
Other,10,67,14
Other,11,203,6
Other,12,392,6
Other,13,28,25
Other,14,12,22
Other,15,305,32

In [95]:
pd.read_csv("data/microbiome_missing.csv").head(20)


Out[95]:
             Taxon  Patient Tissue  Stool
0       Firmicutes        1    632    305
1       Firmicutes        2    136   4182
2       Firmicutes        3    NaN    703
3       Firmicutes        4    408   3946
4       Firmicutes        5    831   8605
5       Firmicutes        6    693     50
6       Firmicutes        7    718    717
7       Firmicutes        8    173     33
8       Firmicutes        9    228    NaN
9       Firmicutes       10    162   3196
10      Firmicutes       11    372 -99999
11      Firmicutes       12   4255   4361
12      Firmicutes       13    107   1667
13      Firmicutes       14      ?    223
14      Firmicutes       15    281   2377
15  Proteobacteria        1   1638   3886
16  Proteobacteria        2   2469   1821
17  Proteobacteria        3    839    661
18  Proteobacteria        4   4414     18
19  Proteobacteria        5  12044     83

[20 rows x 4 columns]

Above, Pandas recognized NA and an empty field as missing data.


In [96]:
pd.isnull(pd.read_csv("data/microbiome_missing.csv")).head(20)


Out[96]:
    Taxon Patient Tissue  Stool
0   False   False  False  False
1   False   False  False  False
2   False   False   True  False
3   False   False  False  False
4   False   False  False  False
5   False   False  False  False
6   False   False  False  False
7   False   False  False  False
8   False   False  False   True
9   False   False  False  False
10  False   False  False  False
11  False   False  False  False
12  False   False  False  False
13  False   False  False  False
14  False   False  False  False
15  False   False  False  False
16  False   False  False  False
17  False   False  False  False
18  False   False  False  False
19  False   False  False  False

[20 rows x 4 columns]

Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values argument:


In [97]:
pd.read_csv("data/microbiome_missing.csv", na_values=['?', -99999]).head(20)


Out[97]:
             Taxon  Patient  Tissue  Stool
0       Firmicutes        1     632    305
1       Firmicutes        2     136   4182
2       Firmicutes        3     NaN    703
3       Firmicutes        4     408   3946
4       Firmicutes        5     831   8605
5       Firmicutes        6     693     50
6       Firmicutes        7     718    717
7       Firmicutes        8     173     33
8       Firmicutes        9     228    NaN
9       Firmicutes       10     162   3196
10      Firmicutes       11     372    NaN
11      Firmicutes       12    4255   4361
12      Firmicutes       13     107   1667
13      Firmicutes       14     NaN    223
14      Firmicutes       15     281   2377
15  Proteobacteria        1    1638   3886
16  Proteobacteria        2    2469   1821
17  Proteobacteria        3     839    661
18  Proteobacteria        4    4414     18
19  Proteobacteria        5   12044     83

[20 rows x 4 columns]

These can be specified on a column-wise basis using an appropriate dict as the argument for na_values.
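
For example, to treat the question mark as missing only in Tissue and the sentinel -99999 only in Stool, a dict keyed by column name can be passed; a quick sketch:

pd.read_csv("data/microbiome_missing.csv",
            na_values={'Tissue': ['?'], 'Stool': [-99999]}).head(20)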

There are several other data formats that can be imported into Python and converted into DataFrames, with the help of built-in or third-party libraries. These include Excel, JSON, XML, HDF5, relational and non-relational databases, and various web APIs. These are beyond the scope of this tutorial, but are covered in Python for Data Analysis.
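
For instance, hypothetical Excel and JSON versions of the microbiome data could be imported along these lines (a sketch; the files, and the xlrd dependency for Excel, are assumptions):

pd.read_excel("data/microbiome.xls", "Sheet 1")  # hypothetical Excel file; needs xlrd
pd.read_json("data/microbiome.json")             # hypothetical JSON file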

Pandas Fundamentals

This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.

For some variety, we will leave our digestive tract bacteria behind and employ some baseball data.


In [98]:
baseball = pd.read_csv("data/baseball.csv", index_col='id')
baseball.head()


Out[98]:
          player  year  stint team  lg   g  ab  r   h  X2b  X3b  hr  rbi  sb  \
id                                                                             
88641  womacto01  2006      2  CHN  NL  19  50  6  14    1    0   1    2   1   
88643  schilcu01  2006      1  BOS  AL  31   2  0   1    0    0   0    0   0   
88645  myersmi01  2006      1  NYA  AL  62   0  0   0    0    0   0    0   0   
88649  helliri01  2006      1  MIL  NL  20   3  0   0    0    0   0    0   0   
88650  johnsra05  2006      1  NYA  AL  33   6  0   1    0    0   0    0   0   

       cs  bb  so  ibb  hbp  sh  sf  gidp  
id                                         
88641   1   4   4    0    0   3   0     0  
88643   0   0   1    0    0   0   0     0  
88645   0   0   0    0    0   0   0     0  
88649   0   0   2    0    0   0   0     0  
88650   0   0   4    0    0   0   0     0  

[5 rows x 22 columns]

Notice that we specified the id column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by combining player and year:


In [99]:
player_id = baseball.player + baseball.year.astype(str)
baseball_newind = baseball.copy()
baseball_newind.index = player_id
baseball_newind.head()


Out[99]:
                  player  year  stint team  lg   g  ab  r   h  X2b  X3b  hr  \
womacto012006  womacto01  2006      2  CHN  NL  19  50  6  14    1    0   1   
schilcu012006  schilcu01  2006      1  BOS  AL  31   2  0   1    0    0   0   
myersmi012006  myersmi01  2006      1  NYA  AL  62   0  0   0    0    0   0   
helliri012006  helliri01  2006      1  MIL  NL  20   3  0   0    0    0   0   
johnsra052006  johnsra05  2006      1  NYA  AL  33   6  0   1    0    0   0   

               rbi  sb  cs  bb  so  ibb  hbp  sh  sf  gidp  
womacto012006    2   1   1   4   4    0    0   3   0     0  
schilcu012006    0   0   0   0   1    0    0   0   0     0  
myersmi012006    0   0   0   0   0    0    0   0   0     0  
helliri012006    0   0   0   0   2    0    0   0   0     0  
johnsra052006    0   0   0   0   4    0    0   0   0     0  

[5 rows x 22 columns]

In [100]:
baseball_newind.index.is_unique


Out[100]:
False

So, indices need not be unique. Our choice is not unique because some players change teams within years. The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels:


In [101]:
baseball_newind.ix['wickmbo012007']


Out[101]:
                  player  year  stint team  lg   g  ab  r  h  X2b  X3b  hr  \
wickmbo012007  wickmbo01  2007      2  ARI  NL   8   0  0  0    0    0   0   
wickmbo012007  wickmbo01  2007      1  ATL  NL  47   0  0  0    0    0   0   

               rbi  sb  cs  bb  so  ibb  hbp  sh  sf  gidp  
wickmbo012007    0   0   0   0   0    0    0   0   0     0  
wickmbo012007    0   0   0   0   0    0    0   0   0     0  

[2 rows x 22 columns]

We will learn more about indexing below.

Manipulating indices

Reindexing allows users to manipulate the data labels in a DataFrame. It forces a DataFrame to conform to the new index, and optionally, fill in missing data if requested.

A simple use of reindex is to alter the order of the rows:


In [102]:
baseball.reindex(baseball.index[::-1]).head()


Out[102]:
          player  year  stint team  lg    g   ab   r    h  X2b  X3b  hr  rbi  \
id                                                                             
89534  alomasa02  2007      1  NYN  NL    8   22   1    3    1    0   0    0   
89533   aloumo01  2007      1  NYN  NL   87  328  51  112   19    1  13   49   
89530  ausmubr01  2007      1  HOU  NL  117  349  38   82   16    3   3   25   
89526  benitar01  2007      1  SFN  NL   19    0   0    0    0    0   0    0   
89525  benitar01  2007      2  FLO  NL   34    0   0    0    0    0   0    0   

       sb  cs  bb  so  ibb  hbp  sh  sf  gidp  
id                                             
89534   0   0   0   3    0    0   0   0     0  
89533   3   0  27  30    5    2   0   3    13  
89530   6   1  37  74    3    6   4   1    11  
89526   0   0   0   0    0    0   0   0     0  
89525   0   0   0   0    0    0   0   0     0  

[5 rows x 22 columns]

Notice that the id index is not sequential. Say we wanted to populate the table with every id value. We could specify an index that is a sequence from the first to the last id numbers in the database, and Pandas would fill in the missing data with NaN values:


In [103]:
id_range = range(baseball.index.values.min(), baseball.index.values.max())
baseball.reindex(id_range).head()


Out[103]:
          player  year  stint team   lg   g  ab   r   h  X2b  X3b  hr  rbi  \
88641  womacto01  2006      2  CHN   NL  19  50   6  14    1    0   1    2   
88642        NaN   NaN    NaN  NaN  NaN NaN NaN NaN NaN  NaN  NaN NaN  NaN   
88643  schilcu01  2006      1  BOS   AL  31   2   0   1    0    0   0    0   
88644        NaN   NaN    NaN  NaN  NaN NaN NaN NaN NaN  NaN  NaN NaN  NaN   
88645  myersmi01  2006      1  NYA   AL  62   0   0   0    0    0   0    0   

       sb  cs  bb  so  ibb  hbp  sh  sf  gidp  
88641   1   1   4   4    0    0   3   0     0  
88642 NaN NaN NaN NaN  NaN  NaN NaN NaN   NaN  
88643   0   0   0   1    0    0   0   0     0  
88644 NaN NaN NaN NaN  NaN  NaN NaN NaN   NaN  
88645   0   0   0   0    0    0   0   0     0  

[5 rows x 22 columns]

Missing values can be filled as desired, either with selected values, or by rule:


In [104]:
baseball.reindex(id_range, method='ffill', columns=['player','year']).head()


Out[104]:
          player  year
88641  womacto01  2006
88642  womacto01  2006
88643  schilcu01  2006
88644  schilcu01  2006
88645  myersmi01  2006

[5 rows x 2 columns]

In [105]:
baseball.reindex(id_range, fill_value='mr.nobody', columns=['player']).head()


Out[105]:
          player
88641  womacto01
88642  mr.nobody
88643  schilcu01
88644  mr.nobody
88645  myersmi01

[5 rows x 1 columns]

Keep in mind that reindex does not work if we pass a non-unique index series.

We can remove rows or columns via the drop method:


In [106]:
baseball.shape


Out[106]:
(100, 22)

In [107]:
baseball.drop([89525, 89526])


Out[107]:
          player  year  stint team  lg    g   ab   r    h  X2b  X3b  hr  rbi  \
id                                                                             
88641  womacto01  2006      2  CHN  NL   19   50   6   14    1    0   1    2   
88643  schilcu01  2006      1  BOS  AL   31    2   0    1    0    0   0    0   
88645  myersmi01  2006      1  NYA  AL   62    0   0    0    0    0   0    0   
88649  helliri01  2006      1  MIL  NL   20    3   0    0    0    0   0    0   
88650  johnsra05  2006      1  NYA  AL   33    6   0    1    0    0   0    0   
88652  finlest01  2006      1  SFN  NL  139  426  66  105   21   12   6   40   
88653  gonzalu01  2006      1  ARI  NL  153  586  93  159   52    2  15   73   
88662   seleaa01  2006      1  LAN  NL   28   26   2    5    1    0   0    0   
89177  francju01  2007      2  ATL  NL   15   40   1   10    3    0   0    8   
89178  francju01  2007      1  NYN  NL   40   50   7   10    0    0   1    8   
89330   zaungr01  2007      1  TOR  AL  110  331  43   80   24    1  10   52   
89333  witasja01  2007      1  TBA  AL    3    0   0    0    0    0   0    0   
89334  williwo02  2007      1  HOU  NL   33   59   3    6    0    0   1    2   
89335  wickmbo01  2007      2  ARI  NL    8    0   0    0    0    0   0    0   
89336  wickmbo01  2007      1  ATL  NL   47    0   0    0    0    0   0    0   
89337  whitero02  2007      1  MIN  AL   38  109   8   19    4    0   4   20   
89338  whiteri01  2007      1  HOU  NL   20    1   0    0    0    0   0    0   
89339  wellsda01  2007      2  LAN  NL    7   15   2    4    1    0   0    1   
89340  wellsda01  2007      1  SDN  NL   22   38   1    4    0    0   0    0   
89341  weathda01  2007      1  CIN  NL   67    0   0    0    0    0   0    0   
             ...   ...    ...  ... ...  ...  ... ...  ...  ...  ... ...  ...   

       sb  cs  bb  so  ibb  hbp  sh  sf  gidp  
id                                             
88641   1   1   4   4    0    0   3   0     0  
88643   0   0   0   1    0    0   0   0     0  
88645   0   0   0   0    0    0   0   0     0  
88649   0   0   0   2    0    0   0   0     0  
88650   0   0   0   4    0    0   0   0     0  
88652   7   0  46  55    2    2   3   4     6  
88653   0   1  69  58   10    7   0   6    14  
88662   0   0   1   7    0    0   6   0     1  
89177   0   0   4  10    1    0   0   1     1  
89178   2   1  10  13    0    0   0   1     1  
89330   0   0  51  55    8    2   1   6     9  
89333   0   0   0   0    0    0   0   0     0  
89334   0   0   0  25    0    0   5   0     1  
89335   0   0   0   0    0    0   0   0     0  
89336   0   0   0   0    0    0   0   0     0  
89337   0   0   6  19    0    3   0   1     2  
89338   0   0   0   1    0    0   0   0     0  
89339   0   0   0   6    0    0   0   0     0  
89340   0   0   0  12    0    0   4   0     0  
89341   0   0   0   0    0    0   0   0     0  
      ... ... ... ...  ...  ... ... ...   ...  

[98 rows x 22 columns]

In [108]:
baseball.drop(['ibb','hbp'], axis=1)


Out[108]:
          player  year  stint team  lg    g   ab   r    h  X2b  X3b  hr  rbi  \
id                                                                             
88641  womacto01  2006      2  CHN  NL   19   50   6   14    1    0   1    2   
88643  schilcu01  2006      1  BOS  AL   31    2   0    1    0    0   0    0   
88645  myersmi01  2006      1  NYA  AL   62    0   0    0    0    0   0    0   
88649  helliri01  2006      1  MIL  NL   20    3   0    0    0    0   0    0   
88650  johnsra05  2006      1  NYA  AL   33    6   0    1    0    0   0    0   
88652  finlest01  2006      1  SFN  NL  139  426  66  105   21   12   6   40   
88653  gonzalu01  2006      1  ARI  NL  153  586  93  159   52    2  15   73   
88662   seleaa01  2006      1  LAN  NL   28   26   2    5    1    0   0    0   
89177  francju01  2007      2  ATL  NL   15   40   1   10    3    0   0    8   
89178  francju01  2007      1  NYN  NL   40   50   7   10    0    0   1    8   
89330   zaungr01  2007      1  TOR  AL  110  331  43   80   24    1  10   52   
89333  witasja01  2007      1  TBA  AL    3    0   0    0    0    0   0    0   
89334  williwo02  2007      1  HOU  NL   33   59   3    6    0    0   1    2   
89335  wickmbo01  2007      2  ARI  NL    8    0   0    0    0    0   0    0   
89336  wickmbo01  2007      1  ATL  NL   47    0   0    0    0    0   0    0   
89337  whitero02  2007      1  MIN  AL   38  109   8   19    4    0   4   20   
89338  whiteri01  2007      1  HOU  NL   20    1   0    0    0    0   0    0   
89339  wellsda01  2007      2  LAN  NL    7   15   2    4    1    0   0    1   
89340  wellsda01  2007      1  SDN  NL   22   38   1    4    0    0   0    0   
89341  weathda01  2007      1  CIN  NL   67    0   0    0    0    0   0    0   
             ...   ...    ...  ... ...  ...  ... ...  ...  ...  ... ...  ...   

       sb  cs  bb  so  sh  sf  gidp  
id                                   
88641   1   1   4   4   3   0     0  
88643   0   0   0   1   0   0     0  
88645   0   0   0   0   0   0     0  
88649   0   0   0   2   0   0     0  
88650   0   0   0   4   0   0     0  
88652   7   0  46  55   3   4     6  
88653   0   1  69  58   0   6    14  
88662   0   0   1   7   6   0     1  
89177   0   0   4  10   0   1     1  
89178   2   1  10  13   0   1     1  
89330   0   0  51  55   1   6     9  
89333   0   0   0   0   0   0     0  
89334   0   0   0  25   5   0     1  
89335   0   0   0   0   0   0     0  
89336   0   0   0   0   0   0     0  
89337   0   0   6  19   0   1     2  
89338   0   0   0   1   0   0     0  
89339   0   0   0   6   0   0     0  
89340   0   0   0  12   4   0     0  
89341   0   0   0   0   0   0     0  
      ... ... ... ... ... ...   ...  

[100 rows x 20 columns]

Indexing and Selection

Indexing works analogously to indexing in NumPy arrays, except we can use the labels in the Index object to extract values in addition to arrays of integers.


In [109]:
# Sample Series object
hits = baseball_newind.h
hits


Out[109]:
womacto012006     14
schilcu012006      1
myersmi012006      0
helliri012006      0
johnsra052006      1
finlest012006    105
gonzalu012006    159
seleaa012006       5
...
cirilje012007     40
bondsba012007     94
biggicr012007    130
benitar012007      0
benitar012007      0
ausmubr012007     82
aloumo012007     112
alomasa022007      3
Name: h, Length: 100, dtype: int64

In [110]:
# Numpy-style indexing
hits[:3]


Out[110]:
womacto012006    14
schilcu012006     1
myersmi012006     0
Name: h, dtype: int64

In [111]:
# Indexing by label
hits[['womacto012006','schilcu012006']]


Out[111]:
womacto012006    14
schilcu012006     1
Name: h, dtype: int64

We can also slice with data labels, since they have an intrinsic order within the Index; note that, unlike positional slices, label-based slices are inclusive of both endpoints:


In [112]:
hits.ix['womacto012006':'gonzalu012006']


Out[112]:
womacto012006     14
schilcu012006      1
myersmi012006      0
helliri012006      0
johnsra052006      1
finlest012006    105
gonzalu012006    159
Name: h, dtype: int64

In [113]:
hits['womacto012006':'gonzalu012006'] = 5
hits


Out[113]:
womacto012006    5
schilcu012006    5
myersmi012006    5
helliri012006    5
johnsra052006    5
finlest012006    5
gonzalu012006    5
seleaa012006     5
...
cirilje012007     40
bondsba012007     94
biggicr012007    130
benitar012007      0
benitar012007      0
ausmubr012007     82
aloumo012007     112
alomasa022007      3
Name: h, Length: 100, dtype: int64

In a DataFrame we can slice along either or both axes:


In [114]:
baseball_newind[['h','ab']]


Out[114]:
                h   ab
womacto012006   5   50
schilcu012006   5    2
myersmi012006   5    0
helliri012006   5    3
johnsra052006   5    6
finlest012006   5  426
gonzalu012006   5  586
seleaa012006    5   26
francju012007  10   40
francju012007  10   50
zaungr012007   80  331
witasja012007   0    0
williwo022007   6   59
wickmbo012007   0    0
wickmbo012007   0    0
whitero022007  19  109
whiteri012007   0    1
wellsda012007   4   15
wellsda012007   4   38
weathda012007   0    0
              ...  ...

[100 rows x 2 columns]

The indexing attribute ix allows us to select subsets of rows and columns in an intuitive way:


In [115]:
baseball_newind.ix['gonzalu012006', ['h','X2b', 'X3b', 'hr']]


Out[115]:
h       5
X2b    52
X3b     2
hr     15
Name: gonzalu012006, dtype: object

In [116]:
baseball_newind.ix[['gonzalu012006','finlest012006'], 5:8]


Out[116]:
                 g   ab   r
gonzalu012006  153  586  93
finlest012006  139  426  66

[2 rows x 3 columns]

In [117]:
baseball_newind.ix[:'myersmi012006', 'hr']


Out[117]:
womacto012006    1
schilcu012006    0
myersmi012006    0
Name: hr, dtype: int64

Operations

DataFrame and Series objects allow for several operations to take place either on a single object, or between two or more objects.

For example, we can perform arithmetic on the elements of two objects, such as combining baseball statistics across years:


In [118]:
hr2006 = baseball[baseball.year==2006].xs('hr', axis=1)
hr2006.index = baseball.player[baseball.year==2006]

hr2007 = baseball[baseball.year==2007].xs('hr', axis=1)
hr2007.index = baseball.player[baseball.year==2007]

Equivalently, the same two Series can be constructed directly from value and index arrays:


In [119]:
hr2006 = pd.Series(baseball.hr[baseball.year==2006].values, index=baseball.player[baseball.year==2006])
hr2007 = pd.Series(baseball.hr[baseball.year==2007].values, index=baseball.player[baseball.year==2007])

In [120]:
hr_total = hr2006 + hr2007
hr_total


Out[120]:
player
alomasa02   NaN
aloumo01    NaN
ausmubr01   NaN
benitar01   NaN
benitar01   NaN
biggicr01   NaN
bondsba01   NaN
cirilje01   NaN
...
whiteri01   NaN
whitero02   NaN
wickmbo01   NaN
wickmbo01   NaN
williwo02   NaN
witasja01   NaN
womacto01   NaN
zaungr01    NaN
Length: 94, dtype: float64

Pandas' data alignment places NaN values for labels that do not overlap in the two Series. In fact, there are only 6 players that occur in both years.


In [121]:
hr_total[hr_total.notnull()]


Out[121]:
player
finlest01     7
gonzalu01    30
johnsra05     0
myersmi01     0
schilcu01     0
seleaa01      0
dtype: float64

While we do want the operation to honor the data labels in this way, we probably do not want the missing values to be filled with NaN. We can use the add method to calculate player home run totals by using the fill_value argument to insert a zero for home runs where labels do not overlap:


In [122]:
hr2007.add(hr2006, fill_value=0)


Out[122]:
player
alomasa02     0
aloumo01     13
ausmubr01     3
benitar01     0
benitar01     0
biggicr01    10
bondsba01    28
cirilje01     0
...
whiteri01     0
whitero02     4
wickmbo01     0
wickmbo01     0
williwo02     1
witasja01     0
womacto01     1
zaungr01     10
Length: 94, dtype: float64

Operations can also be broadcast between rows or columns.

For example, if we subtract the maximum number of home runs hit from the hr column, we get how many fewer than the maximum were hit by each player:


In [123]:
baseball.hr - baseball.hr.max()


Out[123]:
id
88641   -34
88643   -35
88645   -35
88649   -35
88650   -35
88652   -29
88653   -20
88662   -35
...
89502   -33
89521    -7
89523   -25
89525   -35
89526   -35
89530   -32
89533   -22
89534   -35
Name: hr, Length: 100, dtype: int64

Or, looking at things row-wise, we can see how a particular player compares with the rest of the group with respect to important statistics:


In [124]:
baseball.ix[89521]["player"]


Out[124]:
'bondsba01'

In [125]:
stats = baseball[['h','X2b', 'X3b', 'hr']]
diff = stats - stats.xs(89521)
diff[:10]


Out[125]:
        h  X2b  X3b  hr
id                     
88641 -80  -13    0 -27
88643 -93  -14    0 -28
88645 -94  -14    0 -28
88649 -94  -14    0 -28
88650 -93  -14    0 -28
88652  11    7   12 -22
88653  65   38    2 -13
88662 -89  -13    0 -28
89177 -84  -11    0 -28
89178 -84  -14    0 -27

[10 rows x 4 columns]

We can also apply functions to each column or row of a DataFrame:


In [126]:
stats.apply(np.median)


Out[126]:
h      8
X2b    1
X3b    0
hr     0
dtype: float64

In [127]:
stat_range = lambda x: x.max() - x.min()
stats.apply(stat_range)


Out[127]:
h      159
X2b     52
X3b     12
hr      35
dtype: int64

Let's use apply to calculate a meaningful baseball statistic, slugging percentage:

$$SLG = \frac{1B + (2 \times 2B) + (3 \times 3B) + (4 \times HR)}{AB}$$

And just for fun, we will format the resulting estimate.


In [128]:
# Singles = h - X2b - X3b - hr; weight doubles, triples, and home runs by 2, 3, 4.
# The tiny constant in the denominator guards against division by zero when ab == 0.
slg = lambda x: (x['h']-x['X2b']-x['X3b']-x['hr'] + 2*x['X2b'] + 3*x['X3b'] + 4*x['hr'])/(x['ab']+1e-6)
baseball.apply(slg, axis=1).apply(lambda x: '%.3f' % x)


Out[128]:
id
88641    0.360
88643    0.500
88645    0.000
88649    0.000
88650    0.167
88652    0.394
88653    0.444
88662    0.231
...
89502    0.386
89521    0.565
89523    0.381
89525    0.000
89526    0.000
89530    0.324
89533    0.524
89534    0.182
Length: 100, dtype: object

Sorting and Ranking

Pandas objects include methods for re-ordering data.


In [129]:
baseball_newind.sort_index().head()


Out[129]:
                  player  year  stint team  lg    g   ab   r    h  X2b  X3b  \
alomasa022007  alomasa02  2007      1  NYN  NL    8   22   1    3    1    0   
aloumo012007    aloumo01  2007      1  NYN  NL   87  328  51  112   19    1   
ausmubr012007  ausmubr01  2007      1  HOU  NL  117  349  38   82   16    3   
benitar012007  benitar01  2007      2  FLO  NL   34    0   0    0    0    0   
benitar012007  benitar01  2007      1  SFN  NL   19    0   0    0    0    0   

               hr  rbi  sb  cs  bb  so  ibb  hbp  sh  sf  gidp  
alomasa022007   0    0   0   0   0   3    0    0   0   0     0  
aloumo012007   13   49   3   0  27  30    5    2   0   3    13  
ausmubr012007   3   25   6   1  37  74    3    6   4   1    11  
benitar012007   0    0   0   0   0   0    0    0   0   0     0  
benitar012007   0    0   0   0   0   0    0    0   0   0     0  

[5 rows x 22 columns]

In [130]:
baseball_newind.sort_index(ascending=False).head()


Out[130]:
                  player  year  stint team  lg    g   ab   r   h  X2b  X3b  \
zaungr012007    zaungr01  2007      1  TOR  AL  110  331  43  80   24    1   
womacto012006  womacto01  2006      2  CHN  NL   19   50   6   5    1    0   
witasja012007  witasja01  2007      1  TBA  AL    3    0   0   0    0    0   
williwo022007  williwo02  2007      1  HOU  NL   33   59   3   6    0    0   
wickmbo012007  wickmbo01  2007      2  ARI  NL    8    0   0   0    0    0   

               hr  rbi  sb  cs  bb  so  ibb  hbp  sh  sf  gidp  
zaungr012007   10   52   0   0  51  55    8    2   1   6     9  
womacto012006   1    2   1   1   4   4    0    0   3   0     0  
witasja012007   0    0   0   0   0   0    0    0   0   0     0  
williwo022007   1    2   0   0   0  25    0    0   5   0     1  
wickmbo012007   0    0   0   0   0   0    0    0   0   0     0  

[5 rows x 22 columns]

In [131]:
baseball_newind.sort_index(axis=1).head()


Out[131]:
               X2b  X3b  ab  bb  cs   g  gidp  h  hbp  hr  ibb  lg     player  \
womacto012006    1    0  50   4   1  19     0  5    0   1    0  NL  womacto01   
schilcu012006    0    0   2   0   0  31     0  5    0   0    0  AL  schilcu01   
myersmi012006    0    0   0   0   0  62     0  5    0   0    0  AL  myersmi01   
helliri012006    0    0   3   0   0  20     0  5    0   0    0  NL  helliri01   
johnsra052006    0    0   6   0   0  33     0  5    0   0    0  AL  johnsra05   

               r  rbi  sb  sf  sh  so  stint team  year  
womacto012006  6    2   1   0   3   4      2  CHN  2006  
schilcu012006  0    0   0   0   0   1      1  BOS  2006  
myersmi012006  0    0   0   0   0   0      1  NYA  2006  
helliri012006  0    0   0   0   0   2      1  MIL  2006  
johnsra052006  0    0   0   0   0   4      1  NYA  2006  

[5 rows x 22 columns]

We can also use order to sort a Series by value, rather than by label.


In [132]:
baseball.hr.order(ascending=False)


Out[132]:
id
89360    35
89462    30
89521    28
89361    26
89378    25
89489    24
89374    21
89371    21
...
89335    0
89333    0
89177    0
88662    0
88650    0
88649    0
88645    0
88643    0
Name: hr, Length: 100, dtype: int64

For a DataFrame, we can sort according to the values of one or more columns using the by argument of sort_index:


In [133]:
baseball[['player','sb','cs']].sort_index(ascending=[False,True], by=['sb', 'cs']).head(10)


Out[133]:
          player  sb  cs
id                      
89378  sheffga01  22   5
89430  loftoke01  21   4
89347  vizquom01  14   6
89463  greensh01  11   1
88652  finlest01   7   0
89462  griffke02   6   1
89530  ausmubr01   6   1
89466  gonzalu01   6   2
89521  bondsba01   5   0
89438  kleskry01   5   1

[10 rows x 3 columns]

Ranking does not re-arrange the data, but instead returns a Series of ranks, indicating where each value falls relative to the others in the Series.


In [134]:
baseball.hr.rank()


Out[134]:
id
88641    62.5
88643    29.0
88645    29.0
88649    29.0
88650    29.0
88652    76.0
88653    89.5
88662    29.0
...
89502    69.0
89521    98.0
89523    83.5
89525    29.0
89526    29.0
89530    71.5
89533    88.0
89534    29.0
Name: hr, Length: 100, dtype: float64

Ties are assigned the mean value of the tied ranks, which may result in decimal values.


In [135]:
pd.Series([100,100]).rank()


Out[135]:
0    1.5
1    1.5
dtype: float64

Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset:


In [136]:
baseball.hr.rank(method='first')


Out[136]:
id
88641    58
88643     1
88645     2
88649     3
88650     4
88652    75
88653    89
88662     5
...
89502    70
89521    98
89523    85
89525    55
89526    56
89530    72
89533    88
89534    57
Name: hr, Length: 100, dtype: float64

Calling the DataFrame's rank method results in the ranks of all columns:


In [137]:
baseball.rank(ascending=False).head()


Out[137]:
       player  year  stint  team    lg     g    ab     r     h   X2b   X3b  \
id                                                                           
88641     2.0  96.5      7  82.0  31.5  70.0  47.5  40.5  39.0  50.5  63.5   
88643    37.5  96.5     57  88.0  81.5  55.5  73.0  81.0  63.5  78.0  63.5   
88645    47.5  96.5     57  40.5  81.5  36.0  91.0  81.0  84.5  78.0  63.5   
88649    66.0  96.5     57  47.0  31.5  67.5  69.0  81.0  84.5  78.0  63.5   
88650    61.5  96.5     57  40.5  81.5  51.0  64.5  81.0  63.5  78.0  63.5   

         hr   rbi    sb    cs    bb  so  ibb   hbp    sh  sf  gidp  
id                                                                  
88641  38.5  51.0  24.5  17.5  44.5  59   66  65.5  16.0  70  76.5  
88643  72.0  78.5  63.5  62.5  79.0  73   66  65.5  67.5  70  76.5  
88645  72.0  78.5  63.5  62.5  79.0  89   66  65.5  67.5  70  76.5  
88649  72.0  78.5  63.5  62.5  79.0  67   66  65.5  67.5  70  76.5  
88650  72.0  78.5  63.5  62.5  79.0  59   66  65.5  67.5  70  76.5  

[5 rows x 22 columns]

In [138]:
baseball[['r','h','hr']].rank(ascending=False).head()


Out[138]:
          r     h    hr
id                     
88641  40.5  39.0  38.5
88643  81.0  63.5  72.0
88645  81.0  84.5  72.0
88649  81.0  84.5  72.0
88650  81.0  63.5  72.0

[5 rows x 3 columns]

Missing data

The occurrence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrates missing-data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.

Missing data are represented in Series and DataFrame objects by the NaN floating point value. However, None is also treated as missing, since it is commonly used as such in other contexts (e.g. NumPy).


In [139]:
foo = pd.Series([np.nan, -3, None, 'foobar'])
foo


Out[139]:
0       NaN
1        -3
2      None
3    foobar
dtype: object

In [140]:
foo.isnull()


Out[140]:
0     True
1    False
2     True
3    False
dtype: bool

Missing values may be dropped or indexed out:


In [141]:
bacteria2


Out[141]:
phylum
Firmicutes         NaN
Proteobacteria     632
Actinobacteria    1638
Bacteroidetes      569
dtype: float64

In [142]:
bacteria2.dropna()


Out[142]:
phylum
Proteobacteria     632
Actinobacteria    1638
Bacteroidetes      569
dtype: float64

In [143]:
bacteria2[bacteria2.notnull()]


Out[143]:
phylum
Proteobacteria     632
Actinobacteria    1638
Bacteroidetes      569
dtype: float64

By default, dropna drops entire rows in which one or more values are missing.


In [144]:
data


Out[144]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1
6        2  Actinobacteria    754  2013        NaN
7        2   Bacteroidetes    555  2013        NaN

[8 rows x 5 columns]

In [145]:
data.dropna()


Out[145]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1

[6 rows x 5 columns]

This can be overridden by passing the how='all' argument, which only drops a row when every field is a missing value.


In [146]:
data.dropna(how='all')


Out[146]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1
6        2  Actinobacteria    754  2013        NaN
7        2   Bacteroidetes    555  2013        NaN

[8 rows x 5 columns]

This can be customized further by specifying how many values need to be present before a row is dropped via the thresh argument.


In [147]:
data.ix[7, 'year'] = np.nan
data


Out[147]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1
6        2  Actinobacteria    754  2013        NaN
7        2   Bacteroidetes    555   NaN        NaN

[8 rows x 5 columns]

In [148]:
data.dropna(thresh=4)


Out[148]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1
6        2  Actinobacteria    754  2013        NaN

[7 rows x 5 columns]

This is typically used in time series applications, where there are repeated measurements that are incomplete for some subjects.

If we want to drop missing values column-wise instead of row-wise, we use axis=1.


In [149]:
data.dropna(axis=1)


Out[149]:
   patient          phylum  value
0        1      Firmicutes    632
1        1  Proteobacteria   1638
2        1  Actinobacteria    569
3        1   Bacteroidetes     14
4        2      Firmicutes    433
5        2  Proteobacteria      0
6        2  Actinobacteria    754
7        2   Bacteroidetes    555

[8 rows x 3 columns]

Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in Pandas with the fillna method.


In [150]:
bacteria2.fillna(0)


Out[150]:
phylum
Firmicutes           0
Proteobacteria     632
Actinobacteria    1638
Bacteroidetes      569
dtype: float64

In [151]:
data.fillna({'year': 2013, 'treatment':2})


Out[151]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1
6        2  Actinobacteria    754  2013          2
7        2   Bacteroidetes    555  2013          2

[8 rows x 5 columns]

Notice that fillna by default returns a new object with the desired filling behavior, rather than changing the Series or DataFrame in place (in general, this non-destructive behavior is what we want!).
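If we want to keep the filled result, the usual idiom is simply to re-assign it; a one-line sketch (not executed here, since the cells below rely on data being unchanged):

    # Sketch: bind the filled copy back to the original name
    data = data.fillna({'year': 2013, 'treatment': 2})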


In [152]:
data


Out[152]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1
6        2  Actinobacteria    754  2013        NaN
7        2   Bacteroidetes    555   NaN        NaN

[8 rows x 5 columns]

We can alter values in-place using inplace=True.


In [153]:
_ = data.year.fillna(2013, inplace=True)
data


Out[153]:
   patient          phylum  value  year  treatment
0        1      Firmicutes    632  2013          0
1        1  Proteobacteria   1638  2013          0
2        1  Actinobacteria    569  2013          0
3        1   Bacteroidetes     14  2013          0
4        2      Firmicutes    433  2013          1
5        2  Proteobacteria      0  2013          1
6        2  Actinobacteria    754  2013        NaN
7        2   Bacteroidetes    555  2013        NaN

[8 rows x 5 columns]

Missing values can also be interpolated, using any one of a variety of methods:


In [154]:
bacteria2.fillna(method='bfill')


Out[154]:
phylum
Firmicutes         632
Proteobacteria     632
Actinobacteria    1638
Bacteroidetes      569
dtype: float64

In [155]:
bacteria2.fillna(bacteria2.mean())


Out[155]:
phylum
Firmicutes         946.333333
Proteobacteria     632.000000
Actinobacteria    1638.000000
Bacteroidetes      569.000000
dtype: float64
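Series and DataFrame objects also have an interpolate method. As a minimal sketch (not one of the outputs above), the default linear interpolation fills interior gaps from the neighboring values:

    # Sketch: linearly interpolate interior missing values (method='linear' is the default)
    s = pd.Series([1.0, np.nan, np.nan, 4.0])
    s.interpolate()  # 1.0, 2.0, 3.0, 4.0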

Merging and joining DataFrame objects

In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety-at-sea navigation technology that vessels are required to maintain, and that uses transponders to transmit very high frequency (VHF) radio signals containing static information (including ship name, call sign, and country of origin) as well as dynamic information unique to a particular voyage (such as vessel location, heading, and speed).

The International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater, and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute, and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS-encoded signals. This provides a rich source of data that includes both spatial and temporal information.

For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc.


In [156]:
segments = pd.read_csv("data/AIS/transit_segments.csv")
segments.head()


Out[156]:
   mmsi               name  transit  segment  seg_length  avg_sog  min_sog  \
0     1        Us Govt Ves        1        1         5.1     13.2      9.2   
1     1  Dredge Capt Frank        1        1        13.5     18.6     10.4   
2     1      Us Gov Vessel        1        1         4.3     16.2     10.3   
3     1      Us Gov Vessel        2        1         9.2     15.4     14.5   
4     1  Dredge Capt Frank        2        1         9.2     15.4     14.6   

   max_sog  pdgt10        st_time       end_time  
0     14.5    96.5  2/10/09 16:03  2/10/09 16:27  
1     20.6   100.0   4/6/09 14:31   4/6/09 15:20  
2     20.5   100.0   4/6/09 14:36   4/6/09 14:55  
3     16.1   100.0  4/10/09 17:58  4/10/09 18:34  
4     16.2   100.0  4/10/09 17:59  4/10/09 18:35  

[5 rows x 11 columns]

In addition to the behavior of each vessel, we may want a little more information regarding the vessels themselves. In the data/AIS folder there is a second table that contains information about each of the ships that traveled the segments in the segments table.


In [157]:
vessels = pd.read_csv("data/AIS/vessel_information.csv", index_col='mmsi')
vessels.head()


Out[157]:
      num_names                                              names sov  \
mmsi                                                                     
1             8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y   
9             3                         000000009/Raven/Shearwater   N   
21            1                                      Us Gov Vessel   Y   
74            2                                  Mcfaul/Sarah Bell   N   
103           3           Ron G/Us Navy Warship 103/Us Warship 103   Y   

         flag flag_type  num_loas                                    loa  \
mmsi                                                                       
1     Unknown   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0   
9     Unknown   Unknown         2                              50.0/62.0   
21    Unknown   Unknown         1                                  208.0   
74    Unknown   Unknown         1                                  155.0   
103   Unknown   Unknown         2                             26.0/155.0   

      max_loa  num_types                             type  
mmsi                                                       
1         156          4  Dredging/MilOps/Reserved/Towing  
9          62          2                     Pleasure/Tug  
21        208          1                          Unknown  
74        155          1                          Unknown  
103       155          2                   Tanker/Unknown  

[5 rows x 10 columns]

In [158]:
vessels.type.value_counts()


Out[158]:
Cargo        5622
Tanker       2440
Pleasure      601
Tug           221
Sailing       205
Fishing       200
Other         178
Passenger     150
...
Reserved/Tanker/Towing/Tug    1
Cargo/Reserved/Unknown        1
Reserved/Towing/Tug           1
BigTow/Unknown                1
Fishing/Law                   1
BigTow/Towing/WIG             1
Towing/Unknown/WIG            1
AntiPol/Fishing/Pleasure      1
Length: 206, dtype: int64

The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.

In Pandas, we can combine tables according to the value of one or more keys that are used to identify rows, much like an index. Using a trivial example:


In [159]:
df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))
df2 = pd.DataFrame(dict(id=range(3)+range(3), score=np.random.random(size=6)))

df1, df2


Out[159]:
(   age  id
 0   26   0
 1   20   1
 2   25   2
 3   24   3
 
 [4 rows x 2 columns],    id     score
 0   0  0.176817
 1   1  0.713290
 2   2  0.704577
 3   0  0.948222
 4   1  0.974605
 5   2  0.760699
 
 [6 rows x 2 columns])

In [160]:
pd.merge(df1, df2)


Out[160]:
   age  id     score
0   26   0  0.176817
1   26   0  0.948222
2   20   1  0.713290
3   20   1  0.974605
4   25   2  0.704577
5   25   2  0.760699

[6 rows x 3 columns]

Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables.
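If we prefer to be explicit about the key (useful when tables share columns that should not be used for joining), we can name it with the on argument; a sketch equivalent to the merge above:

    # Name the join key explicitly rather than relying on shared column names
    pd.merge(df1, df2, on='id')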

Notice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables.


In [161]:
pd.merge(df1, df2, how='outer')


Out[161]:
   age  id     score
0   26   0  0.176817
1   26   0  0.948222
2   20   1  0.713290
3   20   1  0.974605
4   25   2  0.704577
5   25   2  0.760699
6   24   3       NaN

[7 rows x 3 columns]

The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform left and right joins to include all rows of the left or right table (i.e. the first or second argument to merge, respectively), but not necessarily the other.
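As a quick sketch using the toy tables above, a left join keeps id=3 from df1, pairing it with a NaN score:

    # Keep all rows of the left table (df1); unmatched rows get NaN in df2's columns
    pd.merge(df1, df2, how='left')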

Looking at the two datasets that we wish to merge:


In [162]:
segments.head(1)


Out[162]:
   mmsi         name  transit  segment  seg_length  avg_sog  min_sog  max_sog  \
0     1  Us Govt Ves        1        1         5.1     13.2      9.2     14.5   

   pdgt10        st_time       end_time  
0    96.5  2/10/09 16:03  2/10/09 16:27  

[1 rows x 11 columns]

In [163]:
vessels.head(1)


Out[163]:
      num_names                                              names sov  \
mmsi                                                                     
1             8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y   

         flag flag_type  num_loas                                    loa  \
mmsi                                                                       
1     Unknown   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0   

      max_loa  num_types                             type  
mmsi                                                       
1         156          4  Dredging/MilOps/Reserved/Towing  

[1 rows x 10 columns]

we see that there is an mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other.


In [164]:
segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')

In [165]:
segments_merged.head()


Out[165]:
   num_names                                              names sov     flag  \
0          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
1          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
2          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
3          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
4          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   

  flag_type  num_loas                                    loa  max_loa  \
0   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
1   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
2   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
3   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
4   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   

   num_types                             type  mmsi               name  \
0          4  Dredging/MilOps/Reserved/Towing     1        Us Govt Ves   
1          4  Dredging/MilOps/Reserved/Towing     1  Dredge Capt Frank   
2          4  Dredging/MilOps/Reserved/Towing     1      Us Gov Vessel   
3          4  Dredging/MilOps/Reserved/Towing     1      Us Gov Vessel   
4          4  Dredging/MilOps/Reserved/Towing     1  Dredge Capt Frank   

   transit  segment  seg_length  avg_sog  min_sog  max_sog  pdgt10  \
0        1        1         5.1     13.2      9.2     14.5    96.5   
1        1        1        13.5     18.6     10.4     20.6   100.0   
2        1        1         4.3     16.2     10.3     20.5   100.0   
3        2        1         9.2     15.4     14.5     16.1   100.0   
4        2        1         9.2     15.4     14.6     16.2   100.0   

         st_time       end_time  
0  2/10/09 16:03  2/10/09 16:27  
1   4/6/09 14:31   4/6/09 15:20  
2   4/6/09 14:36   4/6/09 14:55  
3  4/10/09 17:58  4/10/09 18:34  
4  4/10/09 17:59  4/10/09 18:35  

[5 rows x 21 columns]

In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.

Notice that the mmsi field, which was an index on the vessels table, is no longer an index on the merged table.

Here, we used the merge function to perform the merge; we could also have used the merge method for either of the tables:


In [166]:
vessels.merge(segments, left_index=True, right_on='mmsi').head()


Out[166]:
   num_names                                              names sov     flag  \
0          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
1          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
2          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
3          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
4          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   

  flag_type  num_loas                                    loa  max_loa  \
0   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
1   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
2   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
3   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
4   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   

   num_types                             type  mmsi               name  \
0          4  Dredging/MilOps/Reserved/Towing     1        Us Govt Ves   
1          4  Dredging/MilOps/Reserved/Towing     1  Dredge Capt Frank   
2          4  Dredging/MilOps/Reserved/Towing     1      Us Gov Vessel   
3          4  Dredging/MilOps/Reserved/Towing     1      Us Gov Vessel   
4          4  Dredging/MilOps/Reserved/Towing     1  Dredge Capt Frank   

   transit  segment  seg_length  avg_sog  min_sog  max_sog  pdgt10  \
0        1        1         5.1     13.2      9.2     14.5    96.5   
1        1        1        13.5     18.6     10.4     20.6   100.0   
2        1        1         4.3     16.2     10.3     20.5   100.0   
3        2        1         9.2     15.4     14.5     16.1   100.0   
4        2        1         9.2     15.4     14.6     16.2   100.0   

         st_time       end_time  
0  2/10/09 16:03  2/10/09 16:27  
1   4/6/09 14:31   4/6/09 15:20  
2   4/6/09 14:36   4/6/09 14:55  
3  4/10/09 17:58  4/10/09 18:34  
4  4/10/09 17:59  4/10/09 18:35  

[5 rows x 21 columns]

Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append the suffixes _x and _y to the columns to uniquely identify them.


In [167]:
segments['type'] = 'foo'
pd.merge(vessels, segments, left_index=True, right_on='mmsi').head()


Out[167]:
   num_names                                              names sov     flag  \
0          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
1          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
2          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
3          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   
4          8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y  Unknown   

  flag_type  num_loas                                    loa  max_loa  \
0   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
1   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
2   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
3   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   
4   Unknown         7  42.0/48.0/57.0/90.0/138.0/154.0/156.0      156   

   num_types                           type_x  mmsi               name  \
0          4  Dredging/MilOps/Reserved/Towing     1        Us Govt Ves   
1          4  Dredging/MilOps/Reserved/Towing     1  Dredge Capt Frank   
2          4  Dredging/MilOps/Reserved/Towing     1      Us Gov Vessel   
3          4  Dredging/MilOps/Reserved/Towing     1      Us Gov Vessel   
4          4  Dredging/MilOps/Reserved/Towing     1  Dredge Capt Frank   

   transit  segment  seg_length  avg_sog  min_sog  max_sog  pdgt10  \
0        1        1         5.1     13.2      9.2     14.5    96.5   
1        1        1        13.5     18.6     10.4     20.6   100.0   
2        1        1         4.3     16.2     10.3     20.5   100.0   
3        2        1         9.2     15.4     14.5     16.1   100.0   
4        2        1         9.2     15.4     14.6     16.2   100.0   

         st_time       end_time type_y  
0  2/10/09 16:03  2/10/09 16:27    foo  
1   4/6/09 14:31   4/6/09 15:20    foo  
2   4/6/09 14:36   4/6/09 14:55    foo  
3  4/10/09 17:58  4/10/09 18:34    foo  
4  4/10/09 17:59  4/10/09 18:35    foo  

[5 rows x 22 columns]

This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right tables, respectively.
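For example, we might sketch the merge above with more descriptive suffixes (the suffix names here are our own invention):

    # Hypothetical suffixes; any pair of strings can be supplied
    pd.merge(vessels, segments, left_index=True, right_on='mmsi',
             suffixes=('_vessel', '_segment')).head()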

Concatenation

A common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the existing rows or columns, respectively. In NumPy, this is done either with concatenate or the convenience functions c_ and r_:


In [168]:
np.concatenate([np.random.random(5), np.random.random(5)])


Out[168]:
array([ 0.4917765 ,  0.27676379,  0.3403297 ,  0.94561572,  0.09820804,
        0.89495602,  0.45218015,  0.15573864,  0.88671523,  0.98639378])

In [169]:
np.r_[np.random.random(5), np.random.random(5)]


Out[169]:
array([ 0.61759801,  0.78852623,  0.10187625,  0.92809311,  0.5016651 ,
        0.52203484,  0.58372555,  0.76932114,  0.17770492,  0.66359049])

In [170]:
np.c_[np.random.random(5), np.random.random(5)]


Out[170]:
array([[ 0.45807129,  0.27379738],
       [ 0.94707839,  0.77835089],
       [ 0.23536909,  0.36638039],
       [ 0.77183685,  0.87742435],
       [ 0.72104924,  0.82773601]])

This operation is also called binding or stacking.

With Pandas' indexed data structures, there are additional considerations, as the overlap in index values between two data structures affects how they are concatenated.

Let's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. We will use the first column of each dataset as the index.


In [171]:
mb1 = pd.read_excel('data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb1.shape, mb2.shape


Out[171]:
((272, 1), (288, 1))

In [172]:
mb1.head()


Out[172]:
                                                                                         1
0                                                                                         
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera    7
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus          2
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus              3
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum          3
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella  7

[5 rows x 1 columns]

Let's give the index and columns meaningful labels:


In [173]:
mb1.columns = mb2.columns = ['Count']

In [174]:
mb1.index.name = mb2.index.name = 'Taxon'

In [175]:
mb1.head()


Out[175]:
                                                                                         Count
Taxon                                                                                         
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera        7
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus              2
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus                  3
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum              3
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella      7

[5 rows x 1 columns]

The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.

*(A figure of the biological classification hierarchy appeared here; source: Wikipedia.)*

In [176]:
mb1.index[:3]


Out[176]:
Index([u'Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera', u'Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus', u'Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus'], dtype='object')

In [177]:
mb1.index.is_unique


Out[177]:
True

If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated:


In [178]:
pd.concat([mb1, mb2], axis=0).shape


Out[178]:
(560, 1)

However, the index is no longer unique, due to overlap between the two DataFrames.


In [179]:
pd.concat([mb1, mb2], axis=0).index.is_unique


Out[179]:
False

Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames.


In [180]:
pd.concat([mb1, mb2], axis=1).shape


Out[180]:
(438, 2)

In [181]:
pd.concat([mb1, mb2], axis=1).head()


Out[181]:
                                                                                            Count  \
Archaea "Crenarchaeota" Thermoprotei Acidilobales Acidilobaceae Acidilobus                    NaN   
Archaea "Crenarchaeota" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera               NaN   
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera           7   
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus    NaN   
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera       NaN   

                                                                                            Count  
Archaea "Crenarchaeota" Thermoprotei Acidilobales Acidilobaceae Acidilobus                      2  
Archaea "Crenarchaeota" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera                14  
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera          23  
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus      1  
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera         2  

[5 rows x 2 columns]

In [182]:
pd.concat([mb1, mb2], axis=1).values[:5]


Out[182]:
array([[ nan,   2.],
       [ nan,  14.],
       [  7.,  23.],
       [ nan,   1.],
       [ nan,   2.]])

If we are only interested in taxa that are included in both DataFrames, we can specify a join='inner' argument.


In [183]:
pd.concat([mb1, mb2], axis=1, join='inner').head()


Out[183]:
                                                                                         Count  \
Taxon                                                                                            
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera        7   
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus              2   
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus                  3   
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum              3   
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella      7   

                                                                                         Count  
Taxon                                                                                           
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera       23  
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Pyrodictiaceae Pyrolobus              2  
Archaea "Crenarchaeota" Thermoprotei Sulfolobales Sulfolobaceae Stygiolobus                 10  
Archaea "Crenarchaeota" Thermoprotei Thermoproteales Thermofilaceae Thermofilum              9  
Archaea "Euryarchaeota" "Methanomicrobia" Methanocellales Methanocellaceae Methanocella      9  

[5 rows x 2 columns]

If we wanted to use the second table to fill values absent from the first table, we could use combine_first.


In [184]:
mb1.combine_first(mb2).head()


Out[184]:
                                                                                            Count
Taxon                                                                                            
Archaea "Crenarchaeota" Thermoprotei Acidilobales Acidilobaceae Acidilobus                      2
Archaea "Crenarchaeota" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera                14
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera           7
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus      1
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera         2

[5 rows x 1 columns]

Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict.


In [185]:
pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()


Out[185]:
                                                                                            patient1  \
                                                                                               Count   
Archaea "Crenarchaeota" Thermoprotei Acidilobales Acidilobaceae Acidilobus                       NaN   
Archaea "Crenarchaeota" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera                  NaN   
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera              7   
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus       NaN   
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera          NaN   

                                                                                            patient2  
                                                                                               Count  
Archaea "Crenarchaeota" Thermoprotei Acidilobales Acidilobaceae Acidilobus                         2  
Archaea "Crenarchaeota" Thermoprotei Acidilobales Caldisphaeraceae Caldisphaera                   14  
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Ignisphaera             23  
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Sulfophobococcus         1  
Archaea "Crenarchaeota" Thermoprotei Desulfurococcales Desulfurococcaceae Thermosphaera            2  

[5 rows x 2 columns]

If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.
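A minimal sketch: the Taxon index is discarded in favor of a fresh integer range, so the result has the same shape as the axis=0 concatenation above:

    # Row-bind the tables, re-labelling the rows 0..559
    pd.concat([mb1, mb2], ignore_index=True).shape  # (560, 1)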

Reshaping DataFrame objects

In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.

This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.

  • Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
  • Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
  • TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began

In [186]:
cdystonia = pd.read_csv("data/cdystonia.csv", index_col=None)
cdystonia.head()


Out[186]:
   patient  obs  week  site  id  treat  age sex  twstrs
0        1    1     0     1   1  5000U   65   F      32
1        1    2     2     1   1  5000U   65   F      30
2        1    3     4     1   1  5000U   65   F      24
3        1    4     8     1   1  5000U   65   F      37
4        1    5    12     1   1  5000U   65   F      39

[5 rows x 9 columns]

This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in its own row, or using multiple columns to represent multiple measurements.

The stack method rotates the data frame so that columns are represented in rows:


In [187]:
stacked = cdystonia.stack()
stacked


Out[187]:
0  patient        1
   obs            1
   week           0
   site           1
   id             1
   treat      5000U
   age           65
   sex            F
...
630  obs           6
     week         16
     site          9
     id           11
     treat     5000U
     age          57
     sex           M
     twstrs       51
Length: 5679, dtype: object

To complement this, unstack pivots from rows back to columns.


In [188]:
stacked.unstack().head()


Out[188]:
  patient obs week site id  treat age sex twstrs
0       1   1    0    1  1  5000U  65   F     32
1       1   2    2    1  1  5000U  65   F     30
2       1   3    4    1  1  5000U  65   F     24
3       1   4    8    1  1  5000U  65   F     37
4       1   5   12    1  1  5000U  65   F     39

[5 rows x 9 columns]

For this dataset, it makes sense to create a hierarchical index based on the patient and observation:


In [189]:
cdystonia2 = cdystonia.set_index(['patient','obs'])
cdystonia2.head()


Out[189]:
             week  site  id  treat  age sex  twstrs
patient obs                                        
1       1       0     1   1  5000U   65   F      32
        2       2     1   1  5000U   65   F      30
        3       4     1   1  5000U   65   F      24
        4       8     1   1  5000U   65   F      37
        5      12     1   1  5000U   65   F      39

[5 rows x 7 columns]

In [190]:
cdystonia2.index.is_unique


Out[190]:
True

If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.


In [191]:
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()


Out[191]:
obs       1   2   3   4   5   6
patient                        
1        32  30  24  37  39  36
2        60  26  27  41  65  67
3        44  20  23  26  35  35
4        53  61  64  62 NaN NaN
5        53  35  48  49  41  51

[5 rows x 6 columns]

In [192]:
cdystonia_long = cdystonia[['patient','site','id','treat','age','sex']].drop_duplicates().merge(
                    twstrs_wide, right_index=True, left_on='patient', how='inner').head()
cdystonia_long


Out[192]:
    patient  site  id    treat  age sex   1   2   3   4   5   6
0         1     1   1    5000U   65   F  32  30  24  37  39  36
6         2     1   2   10000U   70   F  60  26  27  41  65  67
12        3     1   3    5000U   64   F  44  20  23  26  35  35
18        4     1   4  Placebo   59   F  53  61  64  62 NaN NaN
22        5     1   5   10000U   76   F  53  35  48  49  41  51

[5 rows x 12 columns]

A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:


In [193]:
cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs'].unstack('week').head()


Out[193]:
week                             0   2   4   8   12  16
patient site id treat   age sex                        
1       1    1  5000U   65  F    32  30  24  37  39  36
2       1    2  10000U  70  F    60  26  27  41  65  67
3       1    3  5000U   64  F    44  20  23  26  35  35
4       1    4  Placebo 59  F    53  61  64  62 NaN NaN
5       1    5  10000U  76  F    53  35  48  49  41  51

[5 rows x 6 columns]

To convert our "wide" format back to long, we can use the melt function, appropriately parameterized:


In [194]:
pd.melt(cdystonia_long, id_vars=['patient','site','id','treat','age','sex'], 
        var_name='obs', value_name='twstrs').head()


Out[194]:
   patient  site  id    treat  age sex  obs  twstrs
0        1     1   1    5000U   65   F    1      32
1        2     1   2   10000U   70   F    1      60
2        3     1   3    5000U   64   F    1      44
3        4     1   4  Placebo   59   F    1      53
4        5     1   5   10000U   76   F    1      53

[5 rows x 8 columns]

This illustrates the two formats for longitudinal data: long and wide. It's typically better to store data in long format, because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.

The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.

Data transformation

There is a slew of additional operations for DataFrames that we would collectively refer to as "transformations"; these include tasks such as removing duplicate values, replacing values, and grouping values.

Dealing with duplicates

We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name:


In [195]:
vessels.duplicated(cols='names')


Out[195]:
mmsi
1       False
9       False
21      False
74      False
103     False
310     False
3011    False
4731    False
...
888888882     True
888888888    False
900000000    False
919191919    False
967191190     True
975318642     True
987654321    False
999999999     True
Length: 10771, dtype: bool

In [196]:
vessels.drop_duplicates(['names'])


Out[196]:
        num_names                                              names sov  \
mmsi                                                                       
1               8  Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...   Y   
9               3                         000000009/Raven/Shearwater   N   
21              1                                      Us Gov Vessel   Y   
74              2                                  Mcfaul/Sarah Bell   N   
103             3           Ron G/Us Navy Warship 103/Us Warship 103   Y   
310             1                                           Arabella   N   
3011            1                                         Charleston   N   
4731            1                                          000004731   N   
15151           2                             R L Enterkin/Us Vessel   N   
46809           1                                      Island Trader   N   
80404           1                                         Donnamarie   N   
82003           1                                             Alexis   N   
298716          1                                            Mitchel   N   
366235          1                                       Cape Domingo   N   
439541          2                            Canadian Warship 711/L3   Y   
453556          1                                     Us Govt Vessel   N   
505843          1                                          I.w.haile   N   
527918          1                                     Salvage Master   N   
565026          1                                             Honcho   N   
572329          1                                          Alexandra   N   
              ...                                                ... ...   

                             flag flag_type  num_loas  \
mmsi                                                    
1                         Unknown   Unknown         7   
9                         Unknown   Unknown         2   
21                        Unknown   Unknown         1   
74                        Unknown   Unknown         1   
103                       Unknown   Unknown         2   
310                      Bermuda    Foreign         1   
3011                    Anguilla    Foreign         1   
4731         Yemen (Republic of)    Foreign         1   
15151                     Unknown   Unknown         2   
46809       Syrian Arab Republic    Foreign         1   
80404                     Unknown   Unknown         1   
82003                     Unknown   Unknown         1   
298716                    Unknown   Unknown         1   
366235  United States of America   Domestic         1   
439541                    Unknown   Unknown         2   
453556                    Unknown   Unknown         1   
505843                    Unknown   Unknown         1   
527918                    Unknown   Unknown         1   
565026    Singapore (Republic of)   Foreign         1   
572329                    Tuvalu    Foreign         1   
                              ...       ...       ...   

                                          loa  max_loa  num_types  \
mmsi                                                                
1       42.0/48.0/57.0/90.0/138.0/154.0/156.0      156          4   
9                                   50.0/62.0       62          2   
21                                      208.0      208          1   
74                                      155.0      155          1   
103                                26.0/155.0      155          2   
310                                      47.0       47          1   
3011                                    160.0      160          1   
4731                                     30.0       30          1   
15151                              60.0/175.0      175          1   
46809                                    22.0       22          1   
80404                                    29.0       29          1   
82003                                    29.0       29          2   
298716                                   35.0       35          1   
366235                                  207.0      207          1   
439541                               0.0/55.0       55          2   
453556                                  208.0      208          1   
505843                                   20.0       20          1   
527918                                   20.0       20          1   
565026                                   32.0       32          1   
572329                                   40.0       40          1   
                                          ...      ...        ...   

                                   type  
mmsi                                     
1       Dredging/MilOps/Reserved/Towing  
9                          Pleasure/Tug  
21                              Unknown  
74                              Unknown  
103                      Tanker/Unknown  
310                             Unknown  
3011                              Other  
4731                            Unknown  
15151                               Tug  
46809                            Towing  
80404                          Pleasure  
82003                  Fishing/Pleasure  
298716                           Towing  
366235                            Cargo  
439541                   MilOps/Unknown  
453556                          Unknown  
505843                              WIG  
527918                          Fishing  
565026                           Towing  
572329                           BigTow  
                                    ...  

[10253 rows x 10 columns]

Value replacement

Frequently, we get data columns that are encoded as strings that we wish to represent numerically, for the purpose of including them in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset:


In [197]:
cdystonia.treat.value_counts()


Out[197]:
10000U     213
5000U      211
Placebo    207
dtype: int64

A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.


In [198]:
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}

In [199]:
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment


Out[199]:
0    1
1    1
2    1
3    1
4    1
5    1
6    2
7    2
...
623    2
624    2
625    2
626    1
627    1
628    1
629    1
630    1
Name: treatment, Length: 631, dtype: int64

Alternatively, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.


In [200]:
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})


Out[200]:
patient  obs
1        1      1
         2      1
         3      1
         4      1
         5      1
         6      1
2        1      2
         2      2
...
108      4      2
         5      2
         6      2
109      1      1
         2      1
         4      1
         5      1
         6      1
Name: treat, Length: 631, dtype: int64

Data aggregation and GroupBy operations

One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:

  • aggregation, such as computing the sum or mean of each group, which involves applying a function to each group and returning the aggregated results (see the sketch after this list)
  • slicing the DataFrame into groups and then doing something with the resulting slices (e.g. plotting)
  • group-wise transformation, such as standardization/normalization
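As a quick sketch of the first of these (aggregation), assuming the cdystonia data loaded above, we could compute each patient's mean TWSTRS score:

    # Group rows by patient, then average the twstrs column within each group
    cdystonia.groupby('patient')['twstrs'].mean()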

In [201]:
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)

This grouped dataset is hard to visualize:


In [202]:
cdystonia_grouped


Out[202]:
<pandas.core.groupby.DataFrameGroupBy object at 0x10899af90>

However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:


In [203]:
for patient, group in cdystonia_grouped:
    print patient
    print group
    print


1
   patient  obs  week  site  id  treat  age sex  twstrs  treatment
0        1    1     0     1   1  5000U   65   F      32          1
1        1    2     2     1   1  5000U   65   F      30          1
2        1    3     4     1   1  5000U   65   F      24          1
3        1    4     8     1   1  5000U   65   F      37          1
4        1    5    12     1   1  5000U   65   F      39          1
5        1    6    16     1   1  5000U   65   F      36          1

[6 rows x 10 columns]

2
    patient  obs  week  site  id   treat  age sex  twstrs  treatment
6         2    1     0     1   2  10000U   70   F      60          2
7         2    2     2     1   2  10000U   70   F      26          2
8         2    3     4     1   2  10000U   70   F      27          2
9         2    4     8     1   2  10000U   70   F      41          2
10        2    5    12     1   2  10000U   70   F      65          2
11        2    6    16     1   2  10000U   70   F      67          2

[6 rows x 10 columns]

3
    patient  obs  week  site  id  treat  age sex  twstrs  treatment
12        3    1     0     1   3  5000U   64   F      44          1
13        3    2     2     1   3  5000U   64   F      20          1
14        3    3     4     1   3  5000U   64   F      23          1
15        3    4     8     1   3  5000U   64   F      26          1
16        3    5    12     1   3  5000U   64   F      35          1
17        3    6    16     1   3  5000U   64   F      35          1

[6 rows x 10 columns]

...

109
     patient  obs  week  site  id  treat  age sex  twstrs  treatment
626      109    1     0     9  11  5000U   57   M      53          1
627      109    2     2     9  11  5000U   57   M      38          1
628      109    4     8     9  11  5000U   57   M      33          1
629      109    5    12     9  11  5000U   57   M      36          1
630      109    6    16     9  11  5000U   57   M      51          1

[5 rows x 10 columns]

A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.

For example, we may want to aggregate our data with some function.

*(Figure: diagram of the split-apply-combine operation. Source: "Python for Data Analysis", p. 251)*

We can aggregate in Pandas using the aggregate (or agg, for short) method:


In [204]:
cdystonia_grouped.agg(np.mean).head()


Out[204]:
         patient  obs  week  site  id  age     twstrs  treatment
patient                                                         
1              1  3.5   7.0     1   1   65  33.000000          1
2              2  3.5   7.0     1   2   70  47.666667          2
3              3  3.5   7.0     1   3   64  30.500000          1
4              4  2.5   3.5     1   4   59  60.000000          0
5              5  3.5   7.0     1   5   76  46.166667          2

[5 rows x 8 columns]

Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to compute the mean of string variables, these columns are simply ignored by the method.
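If we did want a string column in the result, one option (a sketch, not used elsewhere in this analysis) is to pass agg a dict mapping each column to its own summary function:

# Keep a string column by telling agg how to summarize it
# ('first' is reasonable here because treat is constant within each patient)
cdystonia.groupby('patient').agg({'twstrs': np.mean, 'treat': 'first'}).head()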

Some aggregation functions are so common that Pandas has convenience methods for them, such as mean:


In [205]:
cdystonia_grouped.mean().head()


Out[205]:
         patient  obs  week  site  id  age     twstrs  treatment
patient                                                         
1              1  3.5   7.0     1   1   65  33.000000          1
2              2  3.5   7.0     1   2   70  47.666667          2
3              3  3.5   7.0     1   3   64  30.500000          1
4              4  2.5   3.5     1   4   59  60.000000          0
5              5  3.5   7.0     1   5   76  46.166667          2

[5 rows x 8 columns]

The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:


In [206]:
cdystonia_grouped.mean().add_suffix('_mean').head()


Out[206]:
         patient_mean  obs_mean  week_mean  site_mean  id_mean  age_mean  \
patient                                                                    
1                   1       3.5        7.0          1        1        65   
2                   2       3.5        7.0          1        2        70   
3                   3       3.5        7.0          1        3        64   
4                   4       2.5        3.5          1        4        59   
5                   5       3.5        7.0          1        5        76   

         twstrs_mean  treatment_mean  
patient                               
1          33.000000               1  
2          47.666667               2  
3          30.500000               1  
4          60.000000               0  
5          46.166667               2  

[5 rows x 8 columns]
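add_prefix works analogously, placing the label at the front of each column name instead:

cdystonia_grouped.mean().add_prefix('mean_').head()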

In [207]:
# The median of the `twstrs` variable
cdystonia_grouped['twstrs'].quantile(0.5)


Out[207]:
patient
1          34.0
2          50.5
3          30.5
4          61.5
5          48.5
6          48.0
7          42.0
8          32.5
...
102        51.5
103        45.0
104        46.0
105        45.5
106        67.5
107        44.0
108        50.5
109        38.0
Length: 109, dtype: float64
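Since the median is such a common summary, the same result is also available through the median convenience method:

cdystonia_grouped['twstrs'].median().head()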

If we wish, we can easily aggregate according to multiple keys:


In [208]:
cdystonia.groupby(['week','site']).mean().head()


Out[208]:
           patient  obs   id        age     twstrs  treatment
week site                                                    
0    1         6.5    1  6.5  59.000000  43.083333   1.000000
     2        19.5    1  7.5  53.928571  51.857143   0.928571
     3        32.5    1  6.5  51.500000  38.750000   1.000000
     4        42.5    1  4.5  59.250000  48.125000   1.000000
     5        49.5    1  3.5  51.833333  49.333333   1.000000

[5 rows x 6 columns]
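Grouping by multiple keys produces a hierarchical index on the result. If a cross-tabulated view is more convenient, one of the index levels can be unstacked into columns; for example, mean twstrs as a week-by-site table:

cdystonia.groupby(['week','site'])['twstrs'].mean().unstack('site').head()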

Alternatively, we can transform the data using a function of our choice with the transform method:


In [209]:
normalize = lambda x: (x - x.mean())/x.std()

cdystonia_grouped.transform(normalize).head()


Out[209]:
   patient       obs      week  site  id  age    twstrs  treatment
0      NaN -1.336306 -1.135550   NaN NaN  NaN -0.181369        NaN
1      NaN -0.801784 -0.811107   NaN NaN  NaN -0.544107        NaN
2      NaN -0.267261 -0.486664   NaN NaN  NaN -1.632322        NaN
3      NaN  0.267261  0.162221   NaN NaN  NaN  0.725476        NaN
4      NaN  0.801784  0.811107   NaN NaN  NaN  1.088214        NaN

[5 rows x 8 columns]

Note that the NaN values above arise because columns such as patient, site, id, age and treatment are constant within each patient group; their group standard deviation is zero, so standardizing them divides by zero. It is easy to do column selection within groupby operations if we are only interested in applying the split-apply-combine pattern to a subset of columns:


In [210]:
cdystonia_grouped['twstrs'].mean().head()


Out[210]:
patient
1          33.000000
2          47.666667
3          30.500000
4          60.000000
5          46.166667
Name: twstrs, dtype: float64
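
Selecting with a list of column names also works, and returns a grouped DataFrame rather than a grouped Series:

# Aggregate just these two columns within each patient group
cdystonia_grouped[['week', 'twstrs']].mean().head()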

If you simply want to divide your DataFrame into chunks for later use, it's easy to convert the grouped object into a dict so that the chunks can be indexed out as needed:


In [211]:
chunks = dict(list(cdystonia_grouped))
chunks[4]


Out[211]:
    patient  obs  week  site  id    treat  age sex  twstrs  treatment
18        4    1     0     1   4  Placebo   59   F      53          0
19        4    2     2     1   4  Placebo   59   F      61          0
20        4    3     4     1   4  Placebo   59   F      64          0
21        4    4     8     1   4  Placebo   59   F      62          0

[4 rows x 10 columns]
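
Building the dict is optional, though: a groupby object is itself iterable, yielding (key, chunk) pairs:

# Process each patient's chunk without materializing a dict
for patient, chunk in cdystonia_grouped:
    pass  # e.g. write each chunk to a separate file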

By default, groupby splits along the rows, but we can specify the axis argument to change this. For example, we can group our columns by their dtype this way:


In [212]:
dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))


Out[212]:
{dtype('int64'):     patient  obs  week  site  id  age  twstrs  treatment
 0         1    1     0     1   1   65      32          1
 1         1    2     2     1   1   65      30          1
 2         1    3     4     1   1   65      24          1
 3         1    4     8     1   1   65      37          1
 4         1    5    12     1   1   65      39          1
 5         1    6    16     1   1   65      36          1
 6         2    1     0     1   2   70      60          2
 7         2    2     2     1   2   70      26          2
 8         2    3     4     1   2   70      27          2
 9         2    4     8     1   2   70      41          2
 10        2    5    12     1   2   70      65          2
 11        2    6    16     1   2   70      67          2
 12        3    1     0     1   3   64      44          1
 13        3    2     2     1   3   64      20          1
 14        3    3     4     1   3   64      23          1
 15        3    4     8     1   3   64      26          1
 16        3    5    12     1   3   64      35          1
 17        3    6    16     1   3   64      35          1
 18        4    1     0     1   4   59      53          0
 19        4    2     2     1   4   59      61          0
         ...  ...   ...   ... ...  ...     ...        ...
 
 [631 rows x 8 columns], dtype('O'):       treat sex
 0     5000U   F
 1     5000U   F
 2     5000U   F
 3     5000U   F
 4     5000U   F
 5     5000U   F
 6    10000U   F
 7    10000U   F
 8    10000U   F
 9    10000U   F
 10   10000U   F
 11   10000U   F
 12    5000U   F
 13    5000U   F
 14    5000U   F
 15    5000U   F
 16    5000U   F
 17    5000U   F
 18  Placebo   F
 19  Placebo   F
         ... ...
 
 [631 rows x 2 columns]}
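
Each value in the resulting dict is a DataFrame holding the columns of one dtype; for instance, the object (string) block can be pulled out by its dtype key. A sketch:

# Extract just the string columns from the dict keyed by dtype
by_dtype = dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))
by_dtype[np.dtype('O')].head()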

Apply

We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine the results into a DataFrame.

The function below takes a DataFrame and a column name, sorts by that column in descending order, and returns the n rows with the largest values. We can use this with apply to return the largest values from every group in a DataFrame in a single call.


In [213]:
def top(df, column, n=5):
    # Sort descending on the given column and keep the top n rows;
    # sort_values is the current name for the older sort_index(by=...) form
    return df.sort_values(by=column, ascending=False)[:n]

To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:


In [214]:
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments


Out[214]:
                                                     names  seg_length
mmsi                                                                  
1    6   Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...        76.0
     5   Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...        17.4
     7   Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...        13.7
9    15                         000000009/Raven/Shearwater        47.2
     14                         000000009/Raven/Shearwater        31.4
     13                         000000009/Raven/Shearwater        19.3
21   16                                      Us Gov Vessel        48.7
     25                                      Us Gov Vessel        25.3
     30                                      Us Gov Vessel        21.7
74   35                                  Mcfaul/Sarah Bell         7.4
     34                                  Mcfaul/Sarah Bell         1.4
103  37           Ron G/Us Navy Warship 103/Us Warship 103        87.5
     41           Ron G/Us Navy Warship 103/Us Warship 103        62.6
     43           Ron G/Us Navy Warship 103/Us Warship 103        59.1
310  51                                           Arabella        77.4
     58                                           Arabella        30.7
     49                                           Arabella        30.4
3011 74                                         Charleston       121.6
     69                                         Charleston        89.7
     77                                         Charleston        59.7
                                                       ...         ...

[29464 rows x 2 columns]

Notice that additional arguments for the applied function can be passed through apply after the function itself; apply always supplies each group's DataFrame as the first argument.
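
The call above is equivalent to binding the extra arguments in a lambda yourself; passing them through apply is simply more concise:

# Same result, with the arguments bound explicitly
segments_merged.groupby('mmsi').apply(
    lambda df: top(df, column='seg_length', n=3))[['names', 'seg_length']]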


In [215]:
top3segments.head(20)


Out[215]:
                                                     names  seg_length
mmsi                                                                  
1    6   Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...        76.0
     5   Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...        17.4
     7   Bil Holman Dredge/Dredge Capt Frank/Emo/Offsho...        13.7
9    15                         000000009/Raven/Shearwater        47.2
     14                         000000009/Raven/Shearwater        31.4
     13                         000000009/Raven/Shearwater        19.3
21   16                                      Us Gov Vessel        48.7
     25                                      Us Gov Vessel        25.3
     30                                      Us Gov Vessel        21.7
74   35                                  Mcfaul/Sarah Bell         7.4
     34                                  Mcfaul/Sarah Bell         1.4
103  37           Ron G/Us Navy Warship 103/Us Warship 103        87.5
     41           Ron G/Us Navy Warship 103/Us Warship 103        62.6
     43           Ron G/Us Navy Warship 103/Us Warship 103        59.1
310  51                                           Arabella        77.4
     58                                           Arabella        30.7
     49                                           Arabella        30.4
3011 74                                         Charleston       121.6
     69                                         Charleston        89.7
     77                                         Charleston        59.7

[20 rows x 2 columns]